I0214 12:56:11.939247 8 e2e.go:243] Starting e2e run "c17a9fbc-6909-4f44-abd6-f96cfc9860fc" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581684970 - Will randomize all specs
Will run 215 of 4412 specs

Feb 14 12:56:12.251: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 12:56:12.255: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 14 12:56:12.286: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 14 12:56:12.346: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 14 12:56:12.346: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 14 12:56:12.346: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 14 12:56:12.357: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 14 12:56:12.357: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 14 12:56:12.357: INFO: e2e test version: v1.15.7
Feb 14 12:56:12.359: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 12:56:12.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Feb 14 12:56:12.543: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f3226d4c-6190-4103-85c0-e313eea1a60a
STEP: Creating a pod to test consume configMaps
Feb 14 12:56:12.574: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5" in namespace "configmap-2952" to be "success or failure"
Feb 14 12:56:12.595: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.967577ms
Feb 14 12:56:14.609: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035194778s
Feb 14 12:56:16.631: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057135063s
Feb 14 12:56:18.647: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07329805s
Feb 14 12:56:20.664: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089653657s
Feb 14 12:56:22.688: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113712597s
STEP: Saw pod success
Feb 14 12:56:22.688: INFO: Pod "pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5" satisfied condition "success or failure"
Feb 14 12:56:22.697: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5 container configmap-volume-test:
STEP: delete the pod
Feb 14 12:56:23.017: INFO: Waiting for pod pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5 to disappear
Feb 14 12:56:23.026: INFO: Pod pod-configmaps-ec23bf14-5871-4d29-9e08-4fc62118e9e5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 12:56:23.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2952" for this suite.
Feb 14 12:56:29.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:56:29.200: INFO: namespace configmap-2952 deletion completed in 6.169255793s

• [SLOW TEST:16.841 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 12:56:29.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 14 12:56:29.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8762'
Feb 14 12:56:31.471: INFO: stderr: ""
Feb 14 12:56:31.471: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 14 12:56:32.486: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:32.486: INFO: Found 0 / 1
Feb 14 12:56:33.513: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:33.513: INFO: Found 0 / 1
Feb 14 12:56:34.487: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:34.488: INFO: Found 0 / 1
Feb 14 12:56:35.483: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:35.484: INFO: Found 0 / 1
Feb 14 12:56:36.497: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:36.497: INFO: Found 0 / 1
Feb 14 12:56:39.903: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:39.903: INFO: Found 0 / 1
Feb 14 12:56:40.487: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:40.487: INFO: Found 0 / 1
Feb 14 12:56:41.488: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:41.489: INFO: Found 0 / 1
Feb 14 12:56:42.488: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:42.488: INFO: Found 0 / 1
Feb 14 12:56:43.486: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:43.486: INFO: Found 0 / 1
Feb 14 12:56:44.492: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:44.492: INFO: Found 1 / 1
Feb 14 12:56:44.492: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb 14 12:56:44.498: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:56:44.498: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Feb 14 12:56:44.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762'
Feb 14 12:56:44.665: INFO: stderr: ""
Feb 14 12:56:44.666: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 Feb 12:56:43.200 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 12:56:43.200 # Server started, Redis version 3.2.12\n1:M 14 Feb 12:56:43.201 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 12:56:43.201 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 14 12:56:44.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762 --tail=1'
Feb 14 12:56:44.810: INFO: stderr: ""
Feb 14 12:56:44.810: INFO: stdout: "1:M 14 Feb 12:56:43.201 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 14 12:56:44.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762 --limit-bytes=1'
Feb 14 12:56:45.013: INFO: stderr: ""
Feb 14 12:56:45.013: INFO: stdout: " "
STEP: exposing timestamps
Feb 14 12:56:45.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762 --tail=1 --timestamps'
Feb 14 12:56:45.150: INFO: stderr: ""
Feb 14 12:56:45.151: INFO: stdout: "2020-02-14T12:56:43.202578429Z 1:M 14 Feb 12:56:43.201 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 14 12:56:47.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762 --since=1s'
Feb 14 12:56:47.880: INFO: stderr: ""
Feb 14 12:56:47.880: INFO: stdout: ""
Feb 14 12:56:47.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ssjr5 redis-master --namespace=kubectl-8762 --since=24h'
Feb 14 12:56:48.087: INFO: stderr: ""
Feb 14 12:56:48.088: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 Feb 12:56:43.200 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 12:56:43.200 # Server started, Redis version 3.2.12\n1:M 14 Feb 12:56:43.201 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 12:56:43.201 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 14 12:56:48.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8762'
Feb 14 12:56:48.175: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:56:48.175: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 14 12:56:48.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-8762'
Feb 14 12:56:48.317: INFO: stderr: "No resources found.\n"
Feb 14 12:56:48.317: INFO: stdout: ""
Feb 14 12:56:48.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-8762 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 12:56:48.475: INFO: stderr: ""
Feb 14 12:56:48.476: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 12:56:48.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8762" for this suite.
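Editor's note on the log-filtering flags exercised in this test: `--tail` and `--limit-bytes` behave like the familiar coreutils filters, which is why `--limit-bytes=1` returned a single character of stdout. A minimal local sketch of the two behaviors (sample.log is an illustrative file, not output captured from this run):

```shell
# Stand-in for the Redis container log (illustrative content).
printf 'line one\nline two\nline three\n' > sample.log

# kubectl logs --tail=1 keeps only the last log line, like `tail -n 1`:
tail -n 1 sample.log    # prints: line three

# kubectl logs --limit-bytes=1 truncates after the first byte, like `head -c 1`:
head -c 1 sample.log    # prints: l

rm -f sample.log
```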
Feb 14 12:57:10.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:57:10.631: INFO: namespace kubectl-8762 deletion completed in 22.144904746s

• [SLOW TEST:41.431 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 12:57:10.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 12:57:24.954: INFO: File wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-19c1ab25-35d0-46f6-b43a-afe58cf6b2ce contains '' instead of 'foo.example.com.'
Feb 14 12:57:24.962: INFO: File jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-19c1ab25-35d0-46f6-b43a-afe58cf6b2ce contains '' instead of 'foo.example.com.'
Feb 14 12:57:24.962: INFO: Lookups using dns-8686/dns-test-19c1ab25-35d0-46f6-b43a-afe58cf6b2ce failed for: [wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local]
Feb 14 12:57:29.996: INFO: DNS probes using dns-test-19c1ab25-35d0-46f6-b43a-afe58cf6b2ce succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 12:57:46.335: INFO: File wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains '' instead of 'bar.example.com.'
Feb 14 12:57:46.344: INFO: File jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains '' instead of 'bar.example.com.'
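Editor's note: the probe the test injects into its client pods (the `dig +short ... CNAME` loops above) is a poll-until-answered pattern. A runnable sketch of that pattern with the lookup stubbed out, since dns-8686 only exists inside the test cluster; the stub starts answering on the third attempt to mimic the propagation delay visible in the failed lookups above:

```shell
#!/bin/sh
# Poll a lookup up to 30 times until it returns the expected CNAME
# target. `answer` is filled by a stub standing in for
# `dig +short dns-test-service-3.dns-8686.svc.cluster.local CNAME`.
attempts=0
while [ "$attempts" -lt 30 ]; do
  attempts=$((attempts + 1))
  # Stub lookup: empty until the third attempt, then the new target.
  if [ "$attempts" -ge 3 ]; then
    answer='bar.example.com.'
  else
    answer=''
  fi
  if [ "$answer" = 'bar.example.com.' ]; then
    echo "probe succeeded after $attempts attempts"
    break
  fi
  # The real probe sleeps 1s between attempts; omitted here.
done
```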
Feb 14 12:57:46.344: INFO: Lookups using dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a failed for: [wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local]
Feb 14 12:57:51.366: INFO: File wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 14 12:57:51.393: INFO: File jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 14 12:57:51.393: INFO: Lookups using dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a failed for: [wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local]
Feb 14 12:57:56.361: INFO: File wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 14 12:57:56.381: INFO: File jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb 14 12:57:56.381: INFO: Lookups using dns-8686/dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a failed for: [wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local]
Feb 14 12:58:01.376: INFO: DNS probes using dns-test-ee697dc2-c5df-4c7b-a67c-6f40b8b0ea9a succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8686.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 12:58:17.763: INFO: File wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-4035a377-3900-4b6b-9bc4-dc9c7573e1ce contains '' instead of '10.104.246.187'
Feb 14 12:58:17.772: INFO: File jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local from pod dns-8686/dns-test-4035a377-3900-4b6b-9bc4-dc9c7573e1ce contains '' instead of '10.104.246.187'
Feb 14 12:58:17.772: INFO: Lookups using dns-8686/dns-test-4035a377-3900-4b6b-9bc4-dc9c7573e1ce failed for: [wheezy_udp@dns-test-service-3.dns-8686.svc.cluster.local jessie_udp@dns-test-service-3.dns-8686.svc.cluster.local]
Feb 14 12:58:22.841: INFO: DNS probes using dns-test-4035a377-3900-4b6b-9bc4-dc9c7573e1ce succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 12:58:23.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8686" for this suite.
Feb 14 12:58:31.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:58:31.251: INFO: namespace dns-8686 deletion completed in 8.111747083s

• [SLOW TEST:80.619 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 12:58:31.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 14 12:58:31.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9109 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 14 12:58:42.478: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0214 12:58:40.855810 238 log.go:172] (0xc0009be210) (0xc00033c8c0) Create stream\nI0214 12:58:40.856147 238 log.go:172] (0xc0009be210) (0xc00033c8c0) Stream added, broadcasting: 1\nI0214 12:58:40.871863 238 log.go:172] (0xc0009be210) Reply frame received for 1\nI0214 12:58:40.871911 238 log.go:172] (0xc0009be210) (0xc0005665a0) Create stream\nI0214 12:58:40.871927 238 log.go:172] (0xc0009be210) (0xc0005665a0) Stream added, broadcasting: 3\nI0214 12:58:40.873160 238 log.go:172] (0xc0009be210) Reply frame received for 3\nI0214 12:58:40.873188 238 log.go:172] (0xc0009be210) (0xc00033c960) Create stream\nI0214 12:58:40.873194 238 log.go:172] (0xc0009be210) (0xc00033c960) Stream added, broadcasting: 5\nI0214 12:58:40.876807 238 log.go:172] (0xc0009be210) Reply frame received for 5\nI0214 12:58:40.876874 238 log.go:172] (0xc0009be210) (0xc000900000) Create stream\nI0214 12:58:40.876900 238 log.go:172] (0xc0009be210) (0xc000900000) Stream added, broadcasting: 7\nI0214 12:58:40.880938 238 log.go:172] (0xc0009be210) Reply frame received for 7\nI0214 12:58:40.881199 238 log.go:172] (0xc0005665a0) (3) Writing data frame\nI0214 12:58:40.881315 238 log.go:172] (0xc0005665a0) (3) Writing data frame\nI0214 12:58:40.889517 238 log.go:172] (0xc0009be210) Data frame received for 5\nI0214 12:58:40.889540 238 log.go:172] (0xc00033c960) (5) Data frame handling\nI0214 12:58:40.889554 238 log.go:172] (0xc00033c960) (5) Data frame sent\nI0214 12:58:40.893392 238 log.go:172] (0xc0009be210) Data frame received for 5\nI0214 12:58:40.893405 238 log.go:172] (0xc00033c960) (5) Data frame handling\nI0214 12:58:40.893416 238 log.go:172] (0xc00033c960) (5) Data frame sent\nI0214 12:58:42.420549 238 log.go:172] (0xc0009be210) (0xc0005665a0) Stream removed, broadcasting: 3\nI0214 12:58:42.420755 238 log.go:172] (0xc0009be210) Data frame received for 1\nI0214 12:58:42.420866 238 log.go:172] (0xc00033c8c0) (1) Data frame handling\nI0214 12:58:42.420905 238 log.go:172] (0xc00033c8c0) (1) Data frame sent\nI0214 12:58:42.420958 238 log.go:172] (0xc0009be210) (0xc00033c960) Stream removed, broadcasting: 5\nI0214 12:58:42.421098 238 log.go:172] (0xc0009be210) (0xc00033c8c0) Stream removed, broadcasting: 1\nI0214 12:58:42.421224 238 log.go:172] (0xc0009be210) (0xc000900000) Stream removed, broadcasting: 7\nI0214 12:58:42.421254 238 log.go:172] (0xc0009be210) Go away received\nI0214 12:58:42.421410 238 log.go:172] (0xc0009be210) (0xc00033c8c0) Stream removed, broadcasting: 1\nI0214 12:58:42.421460 238 log.go:172] (0xc0009be210) (0xc0005665a0) Stream removed, broadcasting: 3\nI0214 12:58:42.421474 238 log.go:172] (0xc0009be210) (0xc00033c960) Stream removed, broadcasting: 5\nI0214 12:58:42.421502 238 log.go:172] (0xc0009be210) (0xc000900000) Stream removed, broadcasting: 7\n"
Feb 14 12:58:42.478: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 12:58:44.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9109" for this suite.
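Editor's note: the stdout line "abcd1234stdin closed" is the attached job's command echoing its stdin back: `cat` copies the attached input through, then `echo` appends the sentinel once stdin closes. Minus the cluster, the plumbing reduces to a plain pipe:

```shell
# Same command the job container ran, fed the same input locally.
# `cat` emits abcd1234 (no trailing newline), then `echo` runs once
# stdin is closed, so the two outputs land on one line.
printf 'abcd1234' | sh -c "cat && echo 'stdin closed'"
# prints: abcd1234stdin closed
```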
Feb 14 12:58:58.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:58:58.700: INFO: namespace kubectl-9109 deletion completed in 14.1893061s

• [SLOW TEST:27.449 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 12:58:58.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 13:02:01.802: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:01.807: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:03.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:03.886: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:05.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:05.831: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:07.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:07.820: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:09.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:09.816: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:11.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:11.828: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:13.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:13.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:15.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:15.827: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:17.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:17.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:19.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:19.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:21.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:21.814: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:23.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:23.816: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:25.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:25.837: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:27.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:27.819: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:29.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:29.817: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:31.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:31.819: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:33.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:33.831: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:35.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:35.823: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 13:02:37.808: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 13:02:37.819: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:02:37.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1910" for this suite.
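Editor's note: the pod this test builds programmatically carries a postStart exec hook, which must complete before the container is considered started. The manifest below is an illustrative reconstruction printed via a heredoc; the image and commands are assumptions, not taken from the framework's generated pod:

```shell
# Print an illustrative manifest for a pod with a postStart exec hook
# (names/image/commands are hypothetical, for reference only).
cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart hook ran"]
EOF
```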
Feb 14 13:03:01.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 14 13:03:02.006: INFO: namespace container-lifecycle-hook-1910 deletion completed in 24.177485668s • [SLOW TEST:243.306 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 14 13:03:02.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0214 13:03:33.034643 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 14 13:03:33.034: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:03:33.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1524" for this suite.
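The garbage collector test deletes the Deployment with `deleteOptions.propagationPolicy: Orphan`, which removes the Deployment object itself but deliberately leaves its ReplicaSet (and that ReplicaSet's pods) behind. The request body for such a delete, if reproduced against the API directly, is a small fragment (sketch):

```yaml
# Body of the DELETE request for the deployment. "Orphan" tells the
# garbage collector not to cascade the deletion to dependents, so the
# ReplicaSet owned by the Deployment survives.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl of the same era (v1.15), `kubectl delete deployment <name> --cascade=false` has the same orphaning effect.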
Feb 14 13:03:41.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:03:41.156: INFO: namespace gc-1524 deletion completed in 8.117043426s
• [SLOW TEST:39.150 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:03:41.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-a7a511e8-311d-402e-999a-da58f7856dc8
STEP: Creating configMap with name cm-test-opt-upd-e7da9dab-0833-42e2-93bd-68f6385b1dad
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a7a511e8-311d-402e-999a-da58f7856dc8
STEP: Updating configmap cm-test-opt-upd-e7da9dab-0833-42e2-93bd-68f6385b1dad
STEP: Creating configMap with name cm-test-opt-create-a628bd81-205b-4beb-93f7-7cb400b7d2f9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:05:12.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2800" for this suite.
Feb 14 13:05:36.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:05:36.908: INFO: namespace projected-2800 deletion completed in 24.163778361s
• [SLOW TEST:115.751 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:05:36.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 14 13:05:37.620: INFO: Waiting up to 5m0s for pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88" in namespace "downward-api-4397" to be "success or failure"
Feb 14 13:05:37.631: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false.
Elapsed: 10.346322ms
Feb 14 13:05:39.666: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045800977s
Feb 14 13:05:41.675: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05503242s
Feb 14 13:05:43.684: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063388874s
Feb 14 13:05:45.696: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075458198s
Feb 14 13:05:47.707: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087154472s
Feb 14 13:05:49.749: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.128876013s
STEP: Saw pod success
Feb 14 13:05:49.749: INFO: Pod "downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88" satisfied condition "success or failure"
Feb 14 13:05:49.755: INFO: Trying to get logs from node iruya-node pod downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88 container dapi-container:
STEP: delete the pod
Feb 14 13:05:49.847: INFO: Waiting for pod downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88 to disappear
Feb 14 13:05:49.864: INFO: Pod downward-api-32176057-4ac6-4beb-99bd-8eb2911a5a88 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:05:49.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4397" for this suite.
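The Downward API test injects pod metadata into the container's environment via `fieldRef` selectors. A minimal pod spec demonstrating the three values the test checks (pod name, namespace, and IP) might look like this sketch (names and image are illustrative, not the suite's actual fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox          # illustrative image
    command: ["sh", "-c", "env"]   # prints the injected variables
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```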
Feb 14 13:05:55.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:05:56.075: INFO: namespace downward-api-4397 deletion completed in 6.172380898s
• [SLOW TEST:19.167 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:05:56.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294
Feb 14 13:05:56.147: INFO: Pod name my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294: Found 0 pods out of 1
Feb 14 13:06:01.154: INFO: Pod name my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294: Found 1 pods out of 1
Feb 14 13:06:01.154: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294" are running
Feb 14 13:06:05.180: INFO: Pod "my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294-sn5js" is running (conditions:
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:05:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:05:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:05:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:05:56 +0000 UTC Reason: Message:}]) Feb 14 13:06:05.180: INFO: Trying to dial the pod Feb 14 13:06:10.218: INFO: Controller my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294: Got expected result from replica 1 [my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294-sn5js]: "my-hostname-basic-b44ffd09-4784-4d2e-9824-01077a25d294-sn5js", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 14 13:06:10.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7518" for this suite. 
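The ReplicationController test creates a single-replica RC whose pod serves its own hostname, then dials the pod and checks that the response matches the pod name. An equivalent minimal RC might look like this sketch (the image is the serve-hostname container the e2e suite commonly uses; treat the exact repository and tag as an assumption):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # Assumed image: an e2e serve-hostname container that answers HTTP
        # requests with the pod's own hostname.
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```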
Feb 14 13:06:16.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:16.433: INFO: namespace replication-controller-7518 deletion completed in 6.208218555s
• [SLOW TEST:20.357 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:06:16.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:06:16.540: INFO: Creating deployment "test-recreate-deployment"
Feb 14 13:06:16.570: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 14 13:06:16.578: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 14 13:06:18.594: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 14 13:06:18.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:20.612: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:22.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:24.604: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:26.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:28.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282376, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 13:06:30.611: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 14 13:06:30.629: INFO: Updating deployment test-recreate-deployment Feb 14 13:06:30.629: INFO: Watching deployment 
"test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 14 13:06:31.088: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-6785,SelfLink:/apis/apps/v1/namespaces/deployment-6785/deployments/test-recreate-deployment,UID:2f9b03c6-1754-44e1-81fd-d12806be2859,ResourceVersion:24319528,Generation:2,CreationTimestamp:2020-02-14 13:06:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-14 13:06:30 +0000 UTC 2020-02-14 13:06:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-14 13:06:31 +0000 UTC 2020-02-14 13:06:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 14 13:06:31.119: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-6785,SelfLink:/apis/apps/v1/namespaces/deployment-6785/replicasets/test-recreate-deployment-5c8c9cc69d,UID:4e813ce2-ccfc-42c2-80a5-f243a1f6585b,ResourceVersion:24319527,Generation:1,CreationTimestamp:2020-02-14 13:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2f9b03c6-1754-44e1-81fd-d12806be2859 0xc000a586a7 0xc000a586a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 14 13:06:31.119: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 14 13:06:31.119: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-6785,SelfLink:/apis/apps/v1/namespaces/deployment-6785/replicasets/test-recreate-deployment-6df85df6b9,UID:49f9e069-f150-49d4-9552-786ec249b94e,ResourceVersion:24319516,Generation:2,CreationTimestamp:2020-02-14 13:06:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 2f9b03c6-1754-44e1-81fd-d12806be2859 0xc000a58777 0xc000a58778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 14 13:06:31.122: INFO: Pod "test-recreate-deployment-5c8c9cc69d-pcqs6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-pcqs6,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-6785,SelfLink:/api/v1/namespaces/deployment-6785/pods/test-recreate-deployment-5c8c9cc69d-pcqs6,UID:052101d8-8087-4c80-942e-2824573a652b,ResourceVersion:24319529,Generation:0,CreationTimestamp:2020-02-14 13:06:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 4e813ce2-ccfc-42c2-80a5-f243a1f6585b 0xc000a591a7 0xc000a591a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jvs2j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jvs2j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jvs2j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a59250} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a59270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:06:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:06:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:06:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:06:30 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:06:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 14 13:06:31.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6785" for this suite. 
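The object dump above shows `Strategy: Recreate` on the test Deployment, which is the property the test verifies: the old ReplicaSet is scaled to zero before any pod from the new ReplicaSet is created. A minimal Deployment matching the shape of the logged object (labels and image mirror the log; treat this as a sketch, not the suite's exact fixture) would be:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    # Recreate deletes all old pods before creating new ones, so pods from
    # the old and new ReplicaSets never run at the same time.
    type: Recreate
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```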
Feb 14 13:06:39.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:39.437: INFO: namespace deployment-6785 deletion completed in 8.311528792s
• [SLOW TEST:23.003 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:06:39.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 14 13:06:39.640: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9715,SelfLink:/api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-resource-version,UID:e0fd21bc-57fd-4abf-bfe6-2e77c79d510e,ResourceVersion:24319569,Generation:0,CreationTimestamp:2020-02-14 13:06:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 13:06:39.641: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9715,SelfLink:/api/v1/namespaces/watch-9715/configmaps/e2e-watch-test-resource-version,UID:e0fd21bc-57fd-4abf-bfe6-2e77c79d510e,ResourceVersion:24319570,Generation:0,CreationTimestamp:2020-02-14 13:06:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:06:39.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9715" for this suite.
Feb 14 13:06:45.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:45.810: INFO: namespace watch-9715 deletion completed in 6.1606349s

• [SLOW TEST:6.373 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:06:45.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:06:45.902: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 21.477268ms)
Feb 14 13:06:45.910: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.069112ms)
Feb 14 13:06:45.914: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.244ms)
Feb 14 13:06:45.919: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.57189ms)
Feb 14 13:06:45.927: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.691284ms)
Feb 14 13:06:45.959: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.252647ms)
Feb 14 13:06:45.966: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.873213ms)
Feb 14 13:06:45.972: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.092808ms)
Feb 14 13:06:45.977: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.675023ms)
Feb 14 13:06:45.981: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.010603ms)
Feb 14 13:06:45.985: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.136408ms)
Feb 14 13:06:45.990: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.308839ms)
Feb 14 13:06:45.994: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.895906ms)
Feb 14 13:06:45.998: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.602146ms)
Feb 14 13:06:46.002: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.277302ms)
Feb 14 13:06:46.007: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.123166ms)
Feb 14 13:06:46.014: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.534437ms)
Feb 14 13:06:46.020: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.016984ms)
Feb 14 13:06:46.026: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.020857ms)
Feb 14 13:06:46.032: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.389429ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:06:46.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9979" for this suite.
Feb 14 13:06:54.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:54.816: INFO: namespace proxy-9979 deletion completed in 8.779210225s

• [SLOW TEST:9.005 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:06:54.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6110133c-3a84-44f0-b3a9-67bed4033109
STEP: Creating a pod to test consume secrets
Feb 14 13:06:55.026: INFO: Waiting up to 5m0s for pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83" in namespace "secrets-1430" to be "success or failure"
Feb 14 13:06:55.105: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 77.957549ms
Feb 14 13:06:57.114: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087169077s
Feb 14 13:06:59.123: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095982914s
Feb 14 13:07:01.129: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101866357s
Feb 14 13:07:03.140: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113402122s
Feb 14 13:07:05.162: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 10.134699244s
Feb 14 13:07:07.172: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Pending", Reason="", readiness=false. Elapsed: 12.145364934s
Feb 14 13:07:09.188: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.160796541s
STEP: Saw pod success
Feb 14 13:07:09.188: INFO: Pod "pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83" satisfied condition "success or failure"
Feb 14 13:07:09.194: INFO: Trying to get logs from node iruya-node pod pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83 container secret-volume-test: 
STEP: delete the pod
Feb 14 13:07:09.411: INFO: Waiting for pod pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83 to disappear
Feb 14 13:07:09.420: INFO: Pod pod-secrets-778aabb5-379c-4426-b6db-88ee9be80e83 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:07:09.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1430" for this suite.
Feb 14 13:07:15.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:07:16.101: INFO: namespace secrets-1430 deletion completed in 6.674288033s

• [SLOW TEST:21.284 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
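[Editor's note] The pod spec the Secrets test above generates is not printed in the log. A minimal sketch of a secret volume with key-to-path mappings, the feature under test, might look like the following; all names, the image, and the key/path values are illustrative assumptions, not values taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # illustrative; the e2e run uses a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # illustrative image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative
      items:                            # "mappings": remap a secret key to a new file path
      - key: data-1
        path: new-path-data-1
```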
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:07:16.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 14 13:07:24.129: INFO: 0 pods remaining
Feb 14 13:07:24.129: INFO: 0 pods has nil DeletionTimestamp
Feb 14 13:07:24.129: INFO: 
STEP: Gathering metrics
W0214 13:07:24.963033       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 13:07:24.963: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:07:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1108" for this suite.
Feb 14 13:07:37.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:07:37.126: INFO: namespace gc-1108 deletion completed in 12.157776023s

• [SLOW TEST:21.025 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
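[Editor's note] The "deleteOptions" named in the Garbage collector spec above is the body sent with the DELETE request for the ReplicationController; foreground propagation is what keeps the rc around (held by a foregroundDeletion finalizer) until all of its pods are gone. A hedged sketch of such a request body, not taken from this run:

```yaml
# Body of DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner object is retained until its dependents are deleted
```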
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:07:37.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:07:37.318: INFO: Creating ReplicaSet my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f
Feb 14 13:07:37.456: INFO: Pod name my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f: Found 0 pods out of 1
Feb 14 13:07:42.467: INFO: Pod name my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f: Found 1 pods out of 1
Feb 14 13:07:42.468: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f" is running
Feb 14 13:07:46.489: INFO: Pod "my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f-pk4f8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:07:37 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:07:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:07:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 13:07:37 +0000 UTC Reason: Message:}])
Feb 14 13:07:46.489: INFO: Trying to dial the pod
Feb 14 13:07:51.535: INFO: Controller my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f: Got expected result from replica 1 [my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f-pk4f8]: "my-hostname-basic-95b900f1-0a33-4f5b-9808-301d70229c8f-pk4f8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:07:51.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6023" for this suite.
Feb 14 13:07:57.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:07:57.729: INFO: namespace replicaset-6023 deletion completed in 6.180461623s

• [SLOW TEST:20.603 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
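[Editor's note] The ReplicaSet manifest created by the test above is not shown in the log. A minimal sketch of a ReplicaSet serving a basic public image on each replica might look like the following; the name, image, and port are illustrative assumptions (the run appends a UUID to the name):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # illustrative; the run uses my-hostname-basic-<uuid>
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname   # illustrative public image that echoes the pod hostname
        ports:
        - containerPort: 9376
```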
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:07:57.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 14 13:07:57.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 14 13:08:00.256: INFO: stderr: ""
Feb 14 13:08:00.256: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:08:00.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1548" for this suite.
Feb 14 13:08:08.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:08.696: INFO: namespace kubectl-1548 deletion completed in 8.318302797s

• [SLOW TEST:10.966 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:08:08.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-9d2e2452-d707-4454-b6b7-00a244a58ee7
STEP: Creating a pod to test consume secrets
Feb 14 13:08:08.984: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e" in namespace "projected-8811" to be "success or failure"
Feb 14 13:08:09.102: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 118.196379ms
Feb 14 13:08:11.109: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125338832s
Feb 14 13:08:13.115: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130673203s
Feb 14 13:08:15.123: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139025891s
Feb 14 13:08:17.141: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156848187s
Feb 14 13:08:19.148: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163500632s
STEP: Saw pod success
Feb 14 13:08:19.148: INFO: Pod "pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e" satisfied condition "success or failure"
Feb 14 13:08:19.151: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 13:08:19.409: INFO: Waiting for pod pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e to disappear
Feb 14 13:08:19.424: INFO: Pod pod-projected-secrets-d94a4551-029f-4861-8d19-29a364c30e1e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:08:19.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8811" for this suite.
Feb 14 13:08:25.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:25.602: INFO: namespace projected-8811 deletion completed in 6.167735181s

• [SLOW TEST:16.905 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
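[Editor's note] The projected-secret pod spec exercised above is likewise omitted from the log. A minimal sketch of a projected volume with a per-item mode set (the "Item Mode" under test) could look like this; names, image, and the mode value are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                       # illustrative image
    command: ["ls", "-l", "/etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-example   # illustrative
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400        # per-item file mode, the property this test verifies
```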
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:08:25.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:08:25.740: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.345135ms)
Feb 14 13:08:25.747: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.398942ms)
Feb 14 13:08:25.752: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.7025ms)
Feb 14 13:08:25.799: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 46.215917ms)
Feb 14 13:08:25.815: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.085305ms)
Feb 14 13:08:25.829: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.112528ms)
Feb 14 13:08:25.836: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.951164ms)
Feb 14 13:08:25.843: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.518438ms)
Feb 14 13:08:25.848: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.530491ms)
Feb 14 13:08:25.855: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.428139ms)
Feb 14 13:08:25.864: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.309736ms)
Feb 14 13:08:25.873: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.015181ms)
Feb 14 13:08:25.886: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.651687ms)
Feb 14 13:08:25.893: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.985333ms)
Feb 14 13:08:25.911: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.740313ms)
Feb 14 13:08:25.926: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.030745ms)
Feb 14 13:08:25.934: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.331638ms)
Feb 14 13:08:25.946: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.073516ms)
Feb 14 13:08:25.958: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.174302ms)
Feb 14 13:08:25.967: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.171491ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:08:25.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8827" for this suite.
Feb 14 13:08:32.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:32.195: INFO: namespace proxy-8827 deletion completed in 6.221951853s

• [SLOW TEST:6.593 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:08:32.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:08:32.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d" in namespace "downward-api-4956" to be "success or failure"
Feb 14 13:08:32.358: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 52.929702ms
Feb 14 13:08:34.366: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060834996s
Feb 14 13:08:36.379: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073712272s
Feb 14 13:08:38.389: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083894905s
Feb 14 13:08:40.413: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10777695s
Feb 14 13:08:42.422: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11678899s
Feb 14 13:08:44.430: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.124378631s
STEP: Saw pod success
Feb 14 13:08:44.430: INFO: Pod "downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d" satisfied condition "success or failure"
Feb 14 13:08:44.434: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d container client-container: 
STEP: delete the pod
Feb 14 13:08:44.609: INFO: Waiting for pod downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d to disappear
Feb 14 13:08:44.618: INFO: Pod downwardapi-volume-335bdeeb-3ea7-47e6-978c-d014062bd69d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:08:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4956" for this suite.
Feb 14 13:08:50.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:50.811: INFO: namespace downward-api-4956 deletion completed in 6.184624084s

• [SLOW TEST:18.615 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
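[Editor's note] The Downward API test above mounts a volume that exposes the container's cpu request as a file. A minimal sketch of such a spec, with all names, the image, and the request value as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # illustrative image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                     # the value the mounted file should report
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                 # report the request in millicores
```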
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:08:50.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 13:08:51.167: INFO: Waiting up to 5m0s for pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711" in namespace "emptydir-6138" to be "success or failure"
Feb 14 13:08:51.173: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711": Phase="Pending", Reason="", readiness=false. Elapsed: 5.475381ms
Feb 14 13:08:53.184: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016540658s
Feb 14 13:08:55.193: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025926527s
Feb 14 13:08:57.204: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036753537s
Feb 14 13:08:59.214: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046889151s
STEP: Saw pod success
Feb 14 13:08:59.214: INFO: Pod "pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711" satisfied condition "success or failure"
Feb 14 13:08:59.217: INFO: Trying to get logs from node iruya-node pod pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711 container test-container: 
STEP: delete the pod
Feb 14 13:08:59.275: INFO: Waiting for pod pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711 to disappear
Feb 14 13:08:59.315: INFO: Pod pod-621dfbdc-1ff4-48a4-a3af-c7f5fc86f711 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:08:59.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6138" for this suite.
Feb 14 13:09:05.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:09:05.448: INFO: namespace emptydir-6138 deletion completed in 6.125330793s

• [SLOW TEST:14.637 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
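The "success or failure" lines above show the e2e framework polling the pod's phase roughly every two seconds until it reaches a terminal phase or the 5m0s timeout expires. A minimal sketch of that wait loop (hypothetical names; the real implementation lives in `test/e2e/framework`):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, poll_interval=2.0):
    """Poll get_phase() until it returns one of target_phases or timeout expires.

    Returns (phase, elapsed_seconds); raises TimeoutError on timeout.
    Sketch of the framework's wait-for-pod loop seen in the log above.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in target_phases:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(poll_interval)

# Example: a pod that reports Pending twice, then Succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_pod_phase(lambda: next(phases), poll_interval=0.01)
```

The short `poll_interval` in the example is only to keep the demonstration fast; the log's ~2s cadence corresponds to the default.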
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:09:05.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:09:05.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f" in namespace "downward-api-5861" to be "success or failure"
Feb 14 13:09:05.582: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251064ms
Feb 14 13:09:07.597: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020946654s
Feb 14 13:09:09.608: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032329791s
Feb 14 13:09:11.615: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038429882s
Feb 14 13:09:13.768: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.191514527s
STEP: Saw pod success
Feb 14 13:09:13.768: INFO: Pod "downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f" satisfied condition "success or failure"
Feb 14 13:09:13.774: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f container client-container: 
STEP: delete the pod
Feb 14 13:09:13.936: INFO: Waiting for pod downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f to disappear
Feb 14 13:09:13.955: INFO: Pod downwardapi-volume-cfe6b549-9c3e-4a5b-b39d-15e9a516571f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:09:13.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5861" for this suite.
Feb 14 13:09:19.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:09:20.101: INFO: namespace downward-api-5861 deletion completed in 6.138110037s

• [SLOW TEST:14.653 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:09:20.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:09:20.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4" in namespace "projected-7855" to be "success or failure"
Feb 14 13:09:20.197: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.884237ms
Feb 14 13:09:22.205: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012419139s
Feb 14 13:09:24.215: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022732267s
Feb 14 13:09:26.230: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037541795s
Feb 14 13:09:28.240: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Running", Reason="", readiness=true. Elapsed: 8.04819152s
Feb 14 13:09:30.250: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057707835s
STEP: Saw pod success
Feb 14 13:09:30.250: INFO: Pod "downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4" satisfied condition "success or failure"
Feb 14 13:09:30.255: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4 container client-container: 
STEP: delete the pod
Feb 14 13:09:30.362: INFO: Waiting for pod downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4 to disappear
Feb 14 13:09:30.369: INFO: Pod downwardapi-volume-af9777a6-4781-4e77-a753-eb38bc62e7f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:09:30.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7855" for this suite.
Feb 14 13:09:36.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:09:36.619: INFO: namespace projected-7855 deletion completed in 6.244581366s

• [SLOW TEST:16.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:09:36.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 14 13:09:36.764: INFO: Waiting up to 5m0s for pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a" in namespace "downward-api-2520" to be "success or failure"
Feb 14 13:09:36.816: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 50.450276ms
Feb 14 13:09:38.827: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062363838s
Feb 14 13:09:40.843: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078207743s
Feb 14 13:09:42.853: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087575303s
Feb 14 13:09:44.871: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105568402s
Feb 14 13:09:46.885: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.120114794s
STEP: Saw pod success
Feb 14 13:09:46.885: INFO: Pod "downward-api-178601f8-2441-497e-9fa1-11459a34c94a" satisfied condition "success or failure"
Feb 14 13:09:46.894: INFO: Trying to get logs from node iruya-node pod downward-api-178601f8-2441-497e-9fa1-11459a34c94a container dapi-container: 
STEP: delete the pod
Feb 14 13:09:47.405: INFO: Waiting for pod downward-api-178601f8-2441-497e-9fa1-11459a34c94a to disappear
Feb 14 13:09:47.413: INFO: Pod downward-api-178601f8-2441-497e-9fa1-11459a34c94a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:09:47.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2520" for this suite.
Feb 14 13:09:53.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:09:53.626: INFO: namespace downward-api-2520 deletion completed in 6.205590119s

• [SLOW TEST:17.006 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
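The downward API test above exposes `limits.cpu/memory` and `requests.cpu/memory` to the container as environment variables, rendered as Kubernetes quantity strings. A hedged sketch of parsing the common quantity suffixes into base units (only `m` for CPU and binary suffixes for memory are handled here; the real parser is `resource.Quantity` in `k8s.io/apimachinery`):

```python
# Sketch: decode the quantity strings a resourceFieldRef would surface as
# env vars, e.g. CPU "500m" and memory "128Mi".
MEM_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_cpu(q: str) -> float:
    """'500m' -> 0.5 cores, '2' -> 2.0 cores."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000.0
    return float(q)

def parse_memory(q: str) -> int:
    """'128Mi' -> bytes; plain integers are already bytes."""
    for suffix, factor in MEM_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)
```

Usage: `parse_cpu("500m")` yields half a core and `parse_memory("128Mi")` yields 134217728 bytes, matching the values a pod would see in its injected env vars.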
SSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:09:53.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 14 13:09:53.684: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 14 13:09:54.778: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 14 13:09:57.040: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 13:09:59.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 13:10:01.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 13:10:03.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 13:10:05.052: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 13:10:10.780: INFO: Waited 3.719447766s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:10:11.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6463" for this suite.
Feb 14 13:10:17.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:10:17.658: INFO: namespace aggregator-6463 deletion completed in 6.146578255s

• [SLOW TEST:24.031 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:10:17.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4ca6d3de-a8c0-47f0-bd68-ddabf3aa108c
STEP: Creating a pod to test consume configMaps
Feb 14 13:10:17.837: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64" in namespace "projected-2174" to be "success or failure"
Feb 14 13:10:17.881: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Pending", Reason="", readiness=false. Elapsed: 43.507273ms
Feb 14 13:10:19.924: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086173127s
Feb 14 13:10:21.969: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132001306s
Feb 14 13:10:23.986: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14822642s
Feb 14 13:10:26.003: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165906741s
Feb 14 13:10:28.026: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.188969115s
STEP: Saw pod success
Feb 14 13:10:28.027: INFO: Pod "pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64" satisfied condition "success or failure"
Feb 14 13:10:28.034: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 13:10:28.155: INFO: Waiting for pod pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64 to disappear
Feb 14 13:10:28.175: INFO: Pod pod-projected-configmaps-158398d4-169d-4b96-a538-a4c780e0af64 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:10:28.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2174" for this suite.
Feb 14 13:10:34.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:10:34.342: INFO: namespace projected-2174 deletion completed in 6.158082608s

• [SLOW TEST:16.685 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:10:34.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 14 13:10:34.392: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:10:34.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3464" for this suite.
Feb 14 13:10:40.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:10:40.676: INFO: namespace kubectl-3464 deletion completed in 6.192031404s

• [SLOW TEST:6.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
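`kubectl proxy -p 0`, exercised above, asks the operating system for an unused ephemeral port instead of a fixed one. The same mechanism in plain sockets, as a self-contained sketch: bind to port 0 and read back the port the kernel chose.

```python
import socket

def ephemeral_port() -> int:
    """Bind to port 0 ("pick any free port") and return the kernel's choice,
    mirroring what `kubectl proxy -p 0` relies on."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]  # the port actually assigned

port = ephemeral_port()
```

The test then parses the proxy's startup output for the assigned port before curling `/api/`, which is why a fixed port is never needed.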
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:10:40.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:10:40.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785" in namespace "projected-1742" to be "success or failure"
Feb 14 13:10:40.979: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 150.322448ms
Feb 14 13:10:42.987: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158811687s
Feb 14 13:10:45.548: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 4.71928552s
Feb 14 13:10:47.556: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 6.727620189s
Feb 14 13:10:49.563: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7350282s
Feb 14 13:10:51.582: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 10.75347772s
Feb 14 13:10:53.611: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782935634s
Feb 14 13:10:55.619: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Running", Reason="", readiness=true. Elapsed: 14.790639568s
Feb 14 13:10:57.627: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.798954033s
STEP: Saw pod success
Feb 14 13:10:57.627: INFO: Pod "downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785" satisfied condition "success or failure"
Feb 14 13:10:57.631: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785 container client-container: 
STEP: delete the pod
Feb 14 13:10:58.135: INFO: Waiting for pod downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785 to disappear
Feb 14 13:10:58.153: INFO: Pod downwardapi-volume-3e152b43-d24f-4297-ac74-701a9e249785 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:10:58.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1742" for this suite.
Feb 14 13:11:04.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:04.329: INFO: namespace projected-1742 deletion completed in 6.163524208s

• [SLOW TEST:23.653 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:11:04.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:11:10.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9476" for this suite.
Feb 14 13:11:16.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:17.031: INFO: namespace namespaces-9476 deletion completed in 6.162161709s
STEP: Destroying namespace "nsdeletetest-9370" for this suite.
Feb 14 13:11:17.032: INFO: Namespace nsdeletetest-9370 was already deleted
STEP: Destroying namespace "nsdeletetest-9048" for this suite.
Feb 14 13:11:23.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:23.157: INFO: namespace nsdeletetest-9048 deletion completed in 6.12426073s

• [SLOW TEST:18.827 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:11:23.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:11:23.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490" in namespace "downward-api-4171" to be "success or failure"
Feb 14 13:11:23.232: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053545ms
Feb 14 13:11:25.242: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015225951s
Feb 14 13:11:27.261: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034169971s
Feb 14 13:11:29.271: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04386183s
Feb 14 13:11:31.281: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054203827s
Feb 14 13:11:33.288: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060864529s
Feb 14 13:11:35.294: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06715109s
STEP: Saw pod success
Feb 14 13:11:35.294: INFO: Pod "downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490" satisfied condition "success or failure"
Feb 14 13:11:35.297: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490 container client-container: 
STEP: delete the pod
Feb 14 13:11:35.344: INFO: Waiting for pod downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490 to disappear
Feb 14 13:11:35.352: INFO: Pod downwardapi-volume-c6ad2ef3-808c-4282-a6fc-65e9f0f65490 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:11:35.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4171" for this suite.
Feb 14 13:11:42.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:42.818: INFO: namespace downward-api-4171 deletion completed in 7.457587561s

• [SLOW TEST:19.660 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
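The Downward API volume test above creates a pod whose container has no memory limit, then reads the `limits.memory` file from the volume and expects it to contain the node's allocatable memory. A minimal sketch of such a pod (names and command are hypothetical; the actual e2e fixture differs in detail):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name; the test generates a UUID-suffixed one
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # No resources.limits set, so the downward API falls back to node allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```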
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:11:42.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 14 13:11:43.010: INFO: PodSpec: initContainers in spec.initContainers
Feb 14 13:12:55.267: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-30923a74-c132-46a5-8fd1-85fcebe35725", GenerateName:"", Namespace:"init-container-7683", SelfLink:"/api/v1/namespaces/init-container-7683/pods/pod-init-30923a74-c132-46a5-8fd1-85fcebe35725", UID:"c8b66cf4-262a-452d-925c-d0d3653f06ec", ResourceVersion:"24320607", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717282703, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"10062605"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mln6k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001b51340), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mln6k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mln6k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mln6k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00251acf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0026cbbc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00251ad80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00251ada0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00251ada8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00251adac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717282703, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002a94e80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021a04d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021a0540)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://70e74c629305658a500e49b6f4ca1ca918938b90c16b046caf908968cb8f5947"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a94ec0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a94ea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:12:55.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7683" for this suite.
Feb 14 13:13:17.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:13:17.462: INFO: namespace init-container-7683 deletion completed in 22.175421858s

• [SLOW TEST:94.643 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
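The pod spec dumped in the failure log above can be reconstructed as a manifest: two init containers (`init1` running `/bin/false`, `init2` running `/bin/true`) ahead of an app container `run1`, under `restartPolicy: Always`. Because `init1` exits non-zero, the kubelet restarts it repeatedly (the log shows `RestartCount:3`) and neither `init2` nor `run1` ever starts. A sketch assembled from the logged spec (metadata simplified):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # the logged pod uses a generated name with a UUID suffix
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always fails, so init never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                # matches the Guaranteed QoS class in the log
      limits:
        cpu: 100m
        memory: "52428800"
```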
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:13:17.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c9b800ce-13ef-40d4-b573-9a14bce8122e
STEP: Creating secret with name s-test-opt-upd-9a106cb9-1712-41c5-8a1b-2b415097f458
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c9b800ce-13ef-40d4-b573-9a14bce8122e
STEP: Updating secret s-test-opt-upd-9a106cb9-1712-41c5-8a1b-2b415097f458
STEP: Creating secret with name s-test-opt-create-56f21bd5-65eb-4ce2-8080-63041305b4ad
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:14:52.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8276" for this suite.
Feb 14 13:15:14.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:15:14.227: INFO: namespace secrets-8276 deletion completed in 22.157748161s

• [SLOW TEST:116.765 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
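The optional-secret test mounts secrets with `optional: true`, which lets the pod start (and the volume update in place) even when a referenced secret is deleted or created after the pod, as the STEP lines above exercise. A hedged sketch of one such mount, reusing a secret name from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo   # hypothetical; the e2e pod mounts several such volumes
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-del-c9b800ce-13ef-40d4-b573-9a14bce8122e
      optional: true   # pod runs even if this secret is deleted, as the test does
```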
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:15:14.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 13:15:14.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6530'
Feb 14 13:15:14.438: INFO: stderr: ""
Feb 14 13:15:14.439: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 14 13:15:24.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6530 -o json'
Feb 14 13:15:24.640: INFO: stderr: ""
Feb 14 13:15:24.640: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-14T13:15:14Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-6530\",\n        \"resourceVersion\": \"24320873\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6530/pods/e2e-test-nginx-pod\",\n        \"uid\": \"9587f9b1-ccb4-4d78-9b71-a4becf03ecb5\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-qlnj2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-qlnj2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-qlnj2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T13:15:14Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T13:15:22Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T13:15:22Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T13:15:14Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://c173a91e8e82c872544de131e8037998f0dfb8116edc8ed475f00dde74f3f6c9\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-02-14T13:15:22Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-14T13:15:14Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 14 13:15:24.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6530'
Feb 14 13:15:25.167: INFO: stderr: ""
Feb 14 13:15:25.168: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 14 13:15:25.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6530'
Feb 14 13:15:31.253: INFO: stderr: ""
Feb 14 13:15:31.254: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:15:31.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6530" for this suite.
Feb 14 13:15:37.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:15:37.485: INFO: namespace kubectl-6530 deletion completed in 6.221242668s

• [SLOW TEST:23.257 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
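In the kubectl replace test above, the runner fetches the pod as JSON, swaps the image, and pipes the result to `kubectl replace -f -`. The equivalent replacement manifest, as a sketch (the real test edits the full JSON dump rather than authoring a file):

```yaml
# Applied with: kubectl replace -f pod.yaml --namespace=kubectl-6530
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-6530
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # replaces nginx:1.14-alpine
```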
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:15:37.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 14 13:15:37.560: INFO: Waiting up to 5m0s for pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0" in namespace "downward-api-350" to be "success or failure"
Feb 14 13:15:37.622: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0": Phase="Pending", Reason="", readiness=false. Elapsed: 62.360942ms
Feb 14 13:15:39.646: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086006884s
Feb 14 13:15:41.655: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094730202s
Feb 14 13:15:43.665: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104765194s
Feb 14 13:15:45.717: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156642614s
STEP: Saw pod success
Feb 14 13:15:45.718: INFO: Pod "downward-api-1136530d-9448-4393-881d-2514fefd84c0" satisfied condition "success or failure"
Feb 14 13:15:45.728: INFO: Trying to get logs from node iruya-node pod downward-api-1136530d-9448-4393-881d-2514fefd84c0 container dapi-container: 
STEP: delete the pod
Feb 14 13:15:45.889: INFO: Waiting for pod downward-api-1136530d-9448-4393-881d-2514fefd84c0 to disappear
Feb 14 13:15:45.896: INFO: Pod downward-api-1136530d-9448-4393-881d-2514fefd84c0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:15:45.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-350" for this suite.
Feb 14 13:15:51.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:15:52.143: INFO: namespace downward-api-350 deletion completed in 6.24068614s

• [SLOW TEST:14.657 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
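This test is the env-var counterpart of the volume test earlier: with no limits set on the container, `resourceFieldRef` env vars resolve to node allocatable values. A minimal sketch (names and command are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu     # defaults to node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory  # defaults to node allocatable memory
```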
SSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:15:52.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-a23539b3-6b78-4274-b932-66824597b0e2
STEP: Creating secret with name secret-projected-all-test-volume-b1d5c9e5-3c83-40cc-b999-87ca1acb1ada
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 14 13:15:52.368: INFO: Waiting up to 5m0s for pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521" in namespace "projected-5159" to be "success or failure"
Feb 14 13:15:52.457: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521": Phase="Pending", Reason="", readiness=false. Elapsed: 88.687138ms
Feb 14 13:15:54.494: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125870835s
Feb 14 13:15:56.513: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144213786s
Feb 14 13:15:58.540: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171802955s
Feb 14 13:16:00.559: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.191084799s
STEP: Saw pod success
Feb 14 13:16:00.560: INFO: Pod "projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521" satisfied condition "success or failure"
Feb 14 13:16:00.578: INFO: Trying to get logs from node iruya-node pod projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521 container projected-all-volume-test: 
STEP: delete the pod
Feb 14 13:16:00.708: INFO: Waiting for pod projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521 to disappear
Feb 14 13:16:00.788: INFO: Pod projected-volume-202ebbb1-e31b-40e6-8ab2-26ab22b33521 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:16:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5159" for this suite.
Feb 14 13:16:06.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:16:06.979: INFO: namespace projected-5159 deletion completed in 6.178500491s

• [SLOW TEST:14.836 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
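The projected-combined test mounts a single `projected` volume that merges a ConfigMap, a Secret, and downward API items, then reads all of them back. A sketch of such a volume, reusing the ConfigMap and Secret names created in the STEP lines above (pod name and paths are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo   # hypothetical name
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /projected-volume/*"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-a23539b3-6b78-4274-b932-66824597b0e2
      - secret:
          name: secret-projected-all-test-volume-b1d5c9e5-3c83-40cc-b999-87ca1acb1ada
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```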
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:16:06.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:16:07.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e" in namespace "projected-5425" to be "success or failure"
Feb 14 13:16:07.091: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.743678ms
Feb 14 13:16:09.098: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024066885s
Feb 14 13:16:11.105: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031189372s
Feb 14 13:16:13.118: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043356341s
Feb 14 13:16:15.146: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071841084s
STEP: Saw pod success
Feb 14 13:16:15.147: INFO: Pod "downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e" satisfied condition "success or failure"
Feb 14 13:16:15.172: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e container client-container: 
STEP: delete the pod
Feb 14 13:16:15.309: INFO: Waiting for pod downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e to disappear
Feb 14 13:16:15.373: INFO: Pod downwardapi-volume-59d2ef7c-6828-4992-b000-e6ad1030575e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:16:15.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5425" for this suite.
Feb 14 13:16:21.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:16:21.568: INFO: namespace projected-5425 deletion completed in 6.176451111s

• [SLOW TEST:14.588 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:16:21.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cdb45a3e-5de6-435f-89c9-b06a3b0fd6aa in namespace container-probe-9076
Feb 14 13:16:31.828: INFO: Started pod busybox-cdb45a3e-5de6-435f-89c9-b06a3b0fd6aa in namespace container-probe-9076
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 13:16:31.831: INFO: Initial restart count of pod busybox-cdb45a3e-5de6-435f-89c9-b06a3b0fd6aa is 0
Feb 14 13:17:22.150: INFO: Restart count of pod container-probe-9076/busybox-cdb45a3e-5de6-435f-89c9-b06a3b0fd6aa is now 1 (50.31915583s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:17:22.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9076" for this suite.
Feb 14 13:17:28.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:17:28.407: INFO: namespace container-probe-9076 deletion completed in 6.21764665s

• [SLOW TEST:66.837 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:17:28.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-f9169e2a-cd52-4730-a6db-c87cf3b417d7
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:17:28.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6674" for this suite.
Feb 14 13:17:34.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:17:34.625: INFO: namespace secrets-6674 deletion completed in 6.127814736s

• [SLOW TEST:6.218 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:17:34.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 13:17:34.756: INFO: Number of nodes with available pods: 0
Feb 14 13:17:34.756: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:35.784: INFO: Number of nodes with available pods: 0
Feb 14 13:17:35.784: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:36.767: INFO: Number of nodes with available pods: 0
Feb 14 13:17:36.767: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:37.775: INFO: Number of nodes with available pods: 0
Feb 14 13:17:37.775: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:38.779: INFO: Number of nodes with available pods: 0
Feb 14 13:17:38.780: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:40.968: INFO: Number of nodes with available pods: 0
Feb 14 13:17:40.969: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:41.787: INFO: Number of nodes with available pods: 0
Feb 14 13:17:41.787: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:42.914: INFO: Number of nodes with available pods: 0
Feb 14 13:17:42.914: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:43.797: INFO: Number of nodes with available pods: 0
Feb 14 13:17:43.797: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:44.773: INFO: Number of nodes with available pods: 1
Feb 14 13:17:44.773: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:17:45.800: INFO: Number of nodes with available pods: 2
Feb 14 13:17:45.800: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 14 13:17:45.844: INFO: Number of nodes with available pods: 1
Feb 14 13:17:45.845: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:46.869: INFO: Number of nodes with available pods: 1
Feb 14 13:17:46.869: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:47.883: INFO: Number of nodes with available pods: 1
Feb 14 13:17:47.883: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:48.863: INFO: Number of nodes with available pods: 1
Feb 14 13:17:48.863: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:49.865: INFO: Number of nodes with available pods: 1
Feb 14 13:17:49.866: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:50.867: INFO: Number of nodes with available pods: 1
Feb 14 13:17:50.867: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:51.858: INFO: Number of nodes with available pods: 1
Feb 14 13:17:51.858: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:53.209: INFO: Number of nodes with available pods: 1
Feb 14 13:17:53.209: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:53.931: INFO: Number of nodes with available pods: 1
Feb 14 13:17:53.931: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:54.873: INFO: Number of nodes with available pods: 1
Feb 14 13:17:54.873: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:56.535: INFO: Number of nodes with available pods: 1
Feb 14 13:17:56.536: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:56.868: INFO: Number of nodes with available pods: 1
Feb 14 13:17:56.868: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:57.861: INFO: Number of nodes with available pods: 1
Feb 14 13:17:57.861: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:58.881: INFO: Number of nodes with available pods: 1
Feb 14 13:17:58.881: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 13:17:59.884: INFO: Number of nodes with available pods: 2
Feb 14 13:17:59.884: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9215, will wait for the garbage collector to delete the pods
Feb 14 13:17:59.973: INFO: Deleting DaemonSet.extensions daemon-set took: 14.819517ms
Feb 14 13:18:00.274: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.791432ms
Feb 14 13:18:17.983: INFO: Number of nodes with available pods: 0
Feb 14 13:18:17.983: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 13:18:17.991: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9215/daemonsets","resourceVersion":"24321300"},"items":null}

Feb 14 13:18:17.994: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9215/pods","resourceVersion":"24321300"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:18:18.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9215" for this suite.
Feb 14 13:18:24.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:18:24.146: INFO: namespace daemonsets-9215 deletion completed in 6.138705408s

• [SLOW TEST:49.521 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:18:24.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 14 13:18:24.250: INFO: Waiting up to 5m0s for pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c" in namespace "containers-3450" to be "success or failure"
Feb 14 13:18:24.255: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211431ms
Feb 14 13:18:26.274: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023093504s
Feb 14 13:18:28.285: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034742336s
Feb 14 13:18:30.294: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04301979s
Feb 14 13:18:32.302: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051488222s
STEP: Saw pod success
Feb 14 13:18:32.302: INFO: Pod "client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c" satisfied condition "success or failure"
Feb 14 13:18:32.306: INFO: Trying to get logs from node iruya-node pod client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c container test-container: 
STEP: delete the pod
Feb 14 13:18:32.371: INFO: Waiting for pod client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c to disappear
Feb 14 13:18:32.375: INFO: Pod client-containers-9bd55a4f-4f91-4053-b467-fdc7e03a0e7c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:18:32.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3450" for this suite.
Feb 14 13:18:38.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:18:38.547: INFO: namespace containers-3450 deletion completed in 6.168560729s

• [SLOW TEST:14.400 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:18:38.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:18:50.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2407" for this suite.
Feb 14 13:18:57.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:18:57.297: INFO: namespace emptydir-wrapper-2407 deletion completed in 6.358146978s

• [SLOW TEST:18.749 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:18:57.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 13:18:57.488: INFO: Waiting up to 5m0s for pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56" in namespace "emptydir-4225" to be "success or failure"
Feb 14 13:18:57.550: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Pending", Reason="", readiness=false. Elapsed: 61.672006ms
Feb 14 13:18:59.561: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072459528s
Feb 14 13:19:01.571: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082295616s
Feb 14 13:19:03.582: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09327517s
Feb 14 13:19:05.592: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104071923s
Feb 14 13:19:07.603: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114801333s
STEP: Saw pod success
Feb 14 13:19:07.604: INFO: Pod "pod-10e20251-a132-400e-8704-53d2ebe5cd56" satisfied condition "success or failure"
Feb 14 13:19:07.610: INFO: Trying to get logs from node iruya-node pod pod-10e20251-a132-400e-8704-53d2ebe5cd56 container test-container: 
STEP: delete the pod
Feb 14 13:19:07.755: INFO: Waiting for pod pod-10e20251-a132-400e-8704-53d2ebe5cd56 to disappear
Feb 14 13:19:07.766: INFO: Pod pod-10e20251-a132-400e-8704-53d2ebe5cd56 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:19:07.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4225" for this suite.
Feb 14 13:19:15.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:19:16.007: INFO: namespace emptydir-4225 deletion completed in 8.229950447s

• [SLOW TEST:18.710 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:19:16.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-e7cf2be2-b60b-45df-a6c5-4cace61c1f1b
STEP: Creating a pod to test consume secrets
Feb 14 13:19:16.154: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5" in namespace "projected-979" to be "success or failure"
Feb 14 13:19:16.174: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.578631ms
Feb 14 13:19:18.180: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02537715s
Feb 14 13:19:20.185: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031108945s
Feb 14 13:19:22.195: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041187085s
Feb 14 13:19:24.207: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052692441s
Feb 14 13:19:26.219: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064657555s
STEP: Saw pod success
Feb 14 13:19:26.219: INFO: Pod "pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5" satisfied condition "success or failure"
Feb 14 13:19:26.225: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 13:19:26.832: INFO: Waiting for pod pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5 to disappear
Feb 14 13:19:26.842: INFO: Pod pod-projected-secrets-573ddf3b-73c9-4ff8-bae1-b35b10cd05b5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:19:26.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-979" for this suite.
Feb 14 13:19:32.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:19:33.033: INFO: namespace projected-979 deletion completed in 6.179785743s

• [SLOW TEST:17.025 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:19:33.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 14 13:19:33.180: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 14 13:19:38.193: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:19:39.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1650" for this suite.
Feb 14 13:19:47.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:19:47.383: INFO: namespace replication-controller-1650 deletion completed in 8.141402444s

• [SLOW TEST:14.350 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:19:47.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 14 13:19:58.260: INFO: Successfully updated pod "annotationupdatebffc5195-c20b-4ab4-9007-c4a7bf1fe4e7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:20:02.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2905" for this suite.
Feb 14 13:20:24.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:20:24.496: INFO: namespace downward-api-2905 deletion completed in 22.131565368s

• [SLOW TEST:37.113 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:20:24.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1222de9c-6f0d-4f9f-b94b-1219e6bc104a
STEP: Creating a pod to test consume configMaps
Feb 14 13:20:24.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69" in namespace "configmap-1270" to be "success or failure"
Feb 14 13:20:24.769: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69": Phase="Pending", Reason="", readiness=false. Elapsed: 48.741698ms
Feb 14 13:20:26.786: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065568532s
Feb 14 13:20:28.794: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07375359s
Feb 14 13:20:30.800: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079907837s
Feb 14 13:20:32.813: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093028462s
STEP: Saw pod success
Feb 14 13:20:32.813: INFO: Pod "pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69" satisfied condition "success or failure"
Feb 14 13:20:32.818: INFO: Trying to get logs from node iruya-node pod pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69 container configmap-volume-test: 
STEP: delete the pod
Feb 14 13:20:32.939: INFO: Waiting for pod pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69 to disappear
Feb 14 13:20:32.951: INFO: Pod pod-configmaps-82e0b880-57c6-40cd-a466-f91ecc905d69 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:20:32.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1270" for this suite.
Feb 14 13:20:38.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:20:39.127: INFO: namespace configmap-1270 deletion completed in 6.169683856s

• [SLOW TEST:14.631 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:20:39.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 14 13:20:39.873: INFO: Pod name wrapped-volume-race-462038c6-4e1f-42f6-b6e1-0293d72def13: Found 0 pods out of 5
Feb 14 13:20:44.943: INFO: Pod name wrapped-volume-race-462038c6-4e1f-42f6-b6e1-0293d72def13: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-462038c6-4e1f-42f6-b6e1-0293d72def13 in namespace emptydir-wrapper-6936, will wait for the garbage collector to delete the pods
Feb 14 13:21:15.052: INFO: Deleting ReplicationController wrapped-volume-race-462038c6-4e1f-42f6-b6e1-0293d72def13 took: 18.626756ms
Feb 14 13:21:17.154: INFO: Terminating ReplicationController wrapped-volume-race-462038c6-4e1f-42f6-b6e1-0293d72def13 pods took: 2.101618294s
STEP: Creating RC which spawns configmap-volume pods
Feb 14 13:22:06.854: INFO: Pod name wrapped-volume-race-b62b90e8-7346-4ee0-a3ab-0860e9cab074: Found 0 pods out of 5
Feb 14 13:22:11.899: INFO: Pod name wrapped-volume-race-b62b90e8-7346-4ee0-a3ab-0860e9cab074: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b62b90e8-7346-4ee0-a3ab-0860e9cab074 in namespace emptydir-wrapper-6936, will wait for the garbage collector to delete the pods
Feb 14 13:22:42.005: INFO: Deleting ReplicationController wrapped-volume-race-b62b90e8-7346-4ee0-a3ab-0860e9cab074 took: 14.009891ms
Feb 14 13:22:42.405: INFO: Terminating ReplicationController wrapped-volume-race-b62b90e8-7346-4ee0-a3ab-0860e9cab074 pods took: 400.619084ms
STEP: Creating RC which spawns configmap-volume pods
Feb 14 13:23:27.162: INFO: Pod name wrapped-volume-race-4e0d5215-f301-4788-b24c-a728145cae1a: Found 0 pods out of 5
Feb 14 13:23:32.179: INFO: Pod name wrapped-volume-race-4e0d5215-f301-4788-b24c-a728145cae1a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4e0d5215-f301-4788-b24c-a728145cae1a in namespace emptydir-wrapper-6936, will wait for the garbage collector to delete the pods
Feb 14 13:24:10.338: INFO: Deleting ReplicationController wrapped-volume-race-4e0d5215-f301-4788-b24c-a728145cae1a took: 17.814947ms
Feb 14 13:24:10.839: INFO: Terminating ReplicationController wrapped-volume-race-4e0d5215-f301-4788-b24c-a728145cae1a pods took: 500.91255ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:24:58.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6936" for this suite.
Feb 14 13:25:12.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:25:13.063: INFO: namespace emptydir-wrapper-6936 deletion completed in 14.161277156s

• [SLOW TEST:273.935 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:25:13.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2293/configmap-test-ab64ce23-231f-4546-aef5-ebb73dc8442b
STEP: Creating a pod to test consume configMaps
Feb 14 13:25:13.155: INFO: Waiting up to 5m0s for pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753" in namespace "configmap-2293" to be "success or failure"
Feb 14 13:25:13.158: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Pending", Reason="", readiness=false. Elapsed: 3.199775ms
Feb 14 13:25:15.163: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787096s
Feb 14 13:25:17.171: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015579136s
Feb 14 13:25:19.219: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063584779s
Feb 14 13:25:21.237: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081383938s
Feb 14 13:25:23.244: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089268997s
STEP: Saw pod success
Feb 14 13:25:23.245: INFO: Pod "pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753" satisfied condition "success or failure"
Feb 14 13:25:23.249: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753 container env-test: 
STEP: delete the pod
Feb 14 13:25:23.329: INFO: Waiting for pod pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753 to disappear
Feb 14 13:25:23.336: INFO: Pod pod-configmaps-8fa84a82-d63d-46f1-bd43-f927a8ad9753 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:25:23.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2293" for this suite.
Feb 14 13:25:29.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:25:29.602: INFO: namespace configmap-2293 deletion completed in 6.254372892s

• [SLOW TEST:16.538 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
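The `[sig-node] ConfigMap` test above consumes a ConfigMap through an environment variable rather than a volume. A sketch of the kind of pod spec that exercises this path — the metadata names, image, and key names are assumptions for illustration, not values taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.31
    command: ["sh", "-c", "echo $DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test   # hypothetical ConfigMap name
          key: data-1            # hypothetical key
```

The container exits after printing the injected value, which is why the pod lands in `Succeeded` rather than staying `Running`.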
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:25:29.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:25:29.677: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:25:30.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2771" for this suite.
Feb 14 13:25:36.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:25:36.977: INFO: namespace custom-resource-definition-2771 deletion completed in 6.161587436s

• [SLOW TEST:7.375 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:25:36.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b7e24a33-47b6-401e-9b8e-3e1fdebab84d
STEP: Creating a pod to test consume secrets
Feb 14 13:25:37.250: INFO: Waiting up to 5m0s for pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be" in namespace "secrets-7646" to be "success or failure"
Feb 14 13:25:37.369: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Pending", Reason="", readiness=false. Elapsed: 119.132513ms
Feb 14 13:25:39.377: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127007855s
Feb 14 13:25:41.386: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135367315s
Feb 14 13:25:43.399: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148208887s
Feb 14 13:25:45.410: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159801851s
Feb 14 13:25:47.423: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172392896s
STEP: Saw pod success
Feb 14 13:25:47.423: INFO: Pod "pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be" satisfied condition "success or failure"
Feb 14 13:25:47.430: INFO: Trying to get logs from node iruya-node pod pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be container secret-volume-test: 
STEP: delete the pod
Feb 14 13:25:47.525: INFO: Waiting for pod pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be to disappear
Feb 14 13:25:47.537: INFO: Pod pod-secrets-8455f4dc-0f4f-49d1-bd59-6d3e348ed2be no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:25:47.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7646" for this suite.
Feb 14 13:25:53.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:25:53.871: INFO: namespace secrets-7646 deletion completed in 6.324978007s
STEP: Destroying namespace "secret-namespace-2521" for this suite.
Feb 14 13:25:59.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:26:00.040: INFO: namespace secret-namespace-2521 deletion completed in 6.168567152s

• [SLOW TEST:23.063 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:26:00.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:26:00.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:26:08.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5795" for this suite.
Feb 14 13:27:00.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:27:00.459: INFO: namespace pods-5795 deletion completed in 52.225845625s

• [SLOW TEST:60.418 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:27:00.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0214 13:27:41.560029       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 13:27:41.560: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:27:41.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2808" for this suite.
Feb 14 13:28:01.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:28:01.871: INFO: namespace gc-2808 deletion completed in 20.30525795s

• [SLOW TEST:61.411 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:28:01.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 14 13:28:02.012: INFO: Waiting up to 5m0s for pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064" in namespace "downward-api-9681" to be "success or failure"
Feb 14 13:28:02.024: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Pending", Reason="", readiness=false. Elapsed: 12.09905ms
Feb 14 13:28:04.047: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034809506s
Feb 14 13:28:06.063: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050810359s
Feb 14 13:28:08.073: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06053334s
Feb 14 13:28:10.080: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068243273s
Feb 14 13:28:12.100: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087861858s
STEP: Saw pod success
Feb 14 13:28:12.100: INFO: Pod "downward-api-8fe63e23-142f-4535-b35b-8a0df825a064" satisfied condition "success or failure"
Feb 14 13:28:12.107: INFO: Trying to get logs from node iruya-node pod downward-api-8fe63e23-142f-4535-b35b-8a0df825a064 container dapi-container: 
STEP: delete the pod
Feb 14 13:28:12.333: INFO: Waiting for pod downward-api-8fe63e23-142f-4535-b35b-8a0df825a064 to disappear
Feb 14 13:28:12.353: INFO: Pod downward-api-8fe63e23-142f-4535-b35b-8a0df825a064 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:28:12.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9681" for this suite.
Feb 14 13:28:18.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:28:18.599: INFO: namespace downward-api-9681 deletion completed in 6.231723729s

• [SLOW TEST:16.727 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:28:18.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-4a87ac57-dfbe-4fa2-b565-079802e86e23
STEP: Creating a pod to test consume configMaps
Feb 14 13:28:18.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1" in namespace "projected-9751" to be "success or failure"
Feb 14 13:28:18.728: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Pending", Reason="", readiness=false. Elapsed: 55.436887ms
Feb 14 13:28:20.747: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074356298s
Feb 14 13:28:22.756: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083046512s
Feb 14 13:28:24.762: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089249086s
Feb 14 13:28:26.775: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10255034s
Feb 14 13:28:28.787: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.113998395s
STEP: Saw pod success
Feb 14 13:28:28.787: INFO: Pod "pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1" satisfied condition "success or failure"
Feb 14 13:28:28.791: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 13:28:29.416: INFO: Waiting for pod pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1 to disappear
Feb 14 13:28:29.435: INFO: Pod pod-projected-configmaps-4bc52e06-2228-42cd-b2a8-ab31bb58fec1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:28:29.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9751" for this suite.
Feb 14 13:28:35.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:28:35.633: INFO: namespace projected-9751 deletion completed in 6.184410977s

• [SLOW TEST:17.033 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:28:35.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 13:28:35.712: INFO: Waiting up to 5m0s for pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3" in namespace "emptydir-4035" to be "success or failure"
Feb 14 13:28:35.759: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Pending", Reason="", readiness=false. Elapsed: 47.335604ms
Feb 14 13:28:37.765: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053092472s
Feb 14 13:28:39.773: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06111544s
Feb 14 13:28:41.779: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067310772s
Feb 14 13:28:43.795: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083055531s
Feb 14 13:28:45.802: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090073535s
STEP: Saw pod success
Feb 14 13:28:45.802: INFO: Pod "pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3" satisfied condition "success or failure"
Feb 14 13:28:45.807: INFO: Trying to get logs from node iruya-node pod pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3 container test-container: 
STEP: delete the pod
Feb 14 13:28:45.895: INFO: Waiting for pod pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3 to disappear
Feb 14 13:28:45.907: INFO: Pod pod-28b4b096-4738-4b4a-84be-a6fc49a3ceb3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:28:45.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4035" for this suite.
Feb 14 13:28:51.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:28:52.132: INFO: namespace emptydir-4035 deletion completed in 6.17438859s

• [SLOW TEST:16.499 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:28:52.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:29:52.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3079" for this suite.
Feb 14 13:30:14.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:30:14.426: INFO: namespace container-probe-3079 deletion completed in 22.11662167s

• [SLOW TEST:82.293 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:30:14.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 14 13:30:14.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1348'
Feb 14 13:30:18.127: INFO: stderr: ""
Feb 14 13:30:18.127: INFO: stdout: "pod/pause created\n"
Feb 14 13:30:18.127: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 14 13:30:18.128: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1348" to be "running and ready"
Feb 14 13:30:18.234: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 106.273165ms
Feb 14 13:30:20.248: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120120425s
Feb 14 13:30:22.267: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139463473s
Feb 14 13:30:24.285: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157342962s
Feb 14 13:30:26.301: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.17304148s
Feb 14 13:30:26.301: INFO: Pod "pause" satisfied condition "running and ready"
Feb 14 13:30:26.301: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 14 13:30:26.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1348'
Feb 14 13:30:26.441: INFO: stderr: ""
Feb 14 13:30:26.441: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 14 13:30:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1348'
Feb 14 13:30:26.604: INFO: stderr: ""
Feb 14 13:30:26.604: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 14 13:30:26.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1348'
Feb 14 13:30:26.714: INFO: stderr: ""
Feb 14 13:30:26.714: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 14 13:30:26.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1348'
Feb 14 13:30:26.838: INFO: stderr: ""
Feb 14 13:30:26.838: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 14 13:30:26.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1348'
Feb 14 13:30:26.986: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 13:30:26.986: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 14 13:30:26.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1348'
Feb 14 13:30:27.192: INFO: stderr: "No resources found.\n"
Feb 14 13:30:27.192: INFO: stdout: ""
Feb 14 13:30:27.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1348 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 13:30:27.286: INFO: stderr: ""
Feb 14 13:30:27.286: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:30:27.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1348" for this suite.
Feb 14 13:30:33.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:30:33.448: INFO: namespace kubectl-1348 deletion completed in 6.158308069s

• [SLOW TEST:19.022 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
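Editor's note: the pod manifest piped to `kubectl create -f -` in the test above is not captured in the log. A minimal manifest consistent with the log (pod name `pause`, label `name=pause` matching the later `get rc,svc -l name=pause` cleanup query) might look like the following sketch; the image and API fields are assumptions, since the conformance suite pins its own images:

```yaml
# Hypothetical reconstruction of the stdin manifest; not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause        # matches the -l name=pause cleanup selector in the log
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image; the suite uses its own pin
```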
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:30:33.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 14 13:30:33.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2208'
Feb 14 13:30:33.959: INFO: stderr: ""
Feb 14 13:30:33.960: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 14 13:30:34.974: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:34.974: INFO: Found 0 / 1
Feb 14 13:30:35.987: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:35.988: INFO: Found 0 / 1
Feb 14 13:30:36.973: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:36.973: INFO: Found 0 / 1
Feb 14 13:30:37.971: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:37.971: INFO: Found 0 / 1
Feb 14 13:30:38.969: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:38.969: INFO: Found 0 / 1
Feb 14 13:30:39.991: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:39.991: INFO: Found 0 / 1
Feb 14 13:30:40.967: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:40.967: INFO: Found 0 / 1
Feb 14 13:30:41.971: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:41.971: INFO: Found 1 / 1
Feb 14 13:30:41.971: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 14 13:30:41.977: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:41.977: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 13:30:41.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fr2vr --namespace=kubectl-2208 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 14 13:30:42.192: INFO: stderr: ""
Feb 14 13:30:42.192: INFO: stdout: "pod/redis-master-fr2vr patched\n"
STEP: checking annotations
Feb 14 13:30:42.211: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:30:42.211: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:30:42.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2208" for this suite.
Feb 14 13:31:04.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:31:04.414: INFO: namespace kubectl-2208 deletion completed in 22.189130537s

• [SLOW TEST:30.965 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
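Editor's note: the `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call in the test above applies a strategic merge patch, which for plain map fields such as `metadata.annotations` behaves like a recursive merge. A minimal local sketch of that merge (no cluster involved; the pod object below is hypothetical, with only the fields the patch touches):

```python
def merge_patch(obj, patch):
    """Recursively merge `patch` into `obj`, the way a strategic merge
    patch updates plain map fields such as metadata.annotations."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)
        else:
            obj[key] = value
    return obj

# Hypothetical pod object; only the fields relevant to the patch above.
pod = {"metadata": {"name": "redis-master-fr2vr", "annotations": {}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(pod["metadata"]["annotations"])  # {'x': 'y'}
```

The test's "checking annotations" step then verifies the annotation landed, which is what the merged map above shows.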
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:31:04.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 14 13:31:04.508: INFO: Waiting up to 5m0s for pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7" in namespace "var-expansion-6035" to be "success or failure"
Feb 14 13:31:04.535: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.071336ms
Feb 14 13:31:06.550: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041480085s
Feb 14 13:31:08.564: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055280861s
Feb 14 13:31:10.582: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073887019s
Feb 14 13:31:12.601: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092918619s
STEP: Saw pod success
Feb 14 13:31:12.602: INFO: Pod "var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7" satisfied condition "success or failure"
Feb 14 13:31:12.609: INFO: Trying to get logs from node iruya-node pod var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7 container dapi-container: 
STEP: delete the pod
Feb 14 13:31:12.754: INFO: Waiting for pod var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7 to disappear
Feb 14 13:31:12.782: INFO: Pod var-expansion-684589b6-062e-4f94-a9d4-0b02b678c2b7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:31:12.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6035" for this suite.
Feb 14 13:31:18.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:31:18.898: INFO: namespace var-expansion-6035 deletion completed in 6.106189487s

• [SLOW TEST:14.484 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
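Editor's note: the var-expansion pod above exercises Kubernetes `$(VAR)` substitution, where a container env value may reference previously defined env vars. The expansion rule can be sketched locally as follows (the variable names are hypothetical; as in Kubernetes, an unresolvable reference is left verbatim):

```python
import re

def expand_env(value, env):
    """Expand $(VAR) references against already-defined env vars, the
    way the kubelet composes container environment values. References
    that cannot be resolved are left as-is."""
    def sub(match):
        name = match.group(1)
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, value)

env = {"FOO": "foo-value"}
print(expand_env("prefix-$(FOO)-suffix", env))  # prefix-foo-value-suffix
print(expand_env("$(MISSING)", env))            # $(MISSING)
```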
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:31:18.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-12c244ee-5fa7-44a0-beed-24e9180d3f5e
STEP: Creating a pod to test consume configMaps
Feb 14 13:31:19.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f" in namespace "configmap-875" to be "success or failure"
Feb 14 13:31:19.083: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.725574ms
Feb 14 13:31:21.093: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017535125s
Feb 14 13:31:23.101: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024557144s
Feb 14 13:31:25.113: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036860628s
Feb 14 13:31:27.162: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08654029s
Feb 14 13:31:29.173: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097385493s
STEP: Saw pod success
Feb 14 13:31:29.173: INFO: Pod "pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f" satisfied condition "success or failure"
Feb 14 13:31:29.178: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f container configmap-volume-test: 
STEP: delete the pod
Feb 14 13:31:29.260: INFO: Waiting for pod pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f to disappear
Feb 14 13:31:29.318: INFO: Pod pod-configmaps-0aceb6c6-1742-43c3-9d1e-8be6cdc38f6f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:31:29.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-875" for this suite.
Feb 14 13:31:35.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:31:35.480: INFO: namespace configmap-875 deletion completed in 6.154320775s

• [SLOW TEST:16.580 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
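Editor's note: the "with mappings" variant above mounts selected ConfigMap keys at renamed paths via the volume's `items` list, rather than mounting every key at its own name. The key-to-path projection can be sketched as follows (the key and path names are hypothetical, not taken from the log):

```python
def project_config_map(data, items):
    """Map ConfigMap keys to file paths the way a configMap volume's
    `items` list does: each entry selects one key and gives it a
    relative path inside the mount."""
    files = {}
    for item in items:
        files[item["path"]] = data[item["key"]]
    return files

# Hypothetical ConfigMap data and items mapping.
data = {"data-2": "value-2", "data-3": "value-3"}
items = [{"key": "data-2", "path": "path/to/data-2"}]
print(project_config_map(data, items))  # {'path/to/data-2': 'value-2'}
```

Keys not listed in `items` (here `data-3`) are not projected at all, which is the behavior the mapped-volume test relies on.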
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:31:35.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5331
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 13:31:35.591: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 13:32:09.827: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5331 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:32:09.828: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:32:09.915235       8 log.go:172] (0xc000fd1b80) (0xc001c5cf00) Create stream
I0214 13:32:09.915483       8 log.go:172] (0xc000fd1b80) (0xc001c5cf00) Stream added, broadcasting: 1
I0214 13:32:09.933337       8 log.go:172] (0xc000fd1b80) Reply frame received for 1
I0214 13:32:09.933480       8 log.go:172] (0xc000fd1b80) (0xc0003a99a0) Create stream
I0214 13:32:09.933495       8 log.go:172] (0xc000fd1b80) (0xc0003a99a0) Stream added, broadcasting: 3
I0214 13:32:09.942986       8 log.go:172] (0xc000fd1b80) Reply frame received for 3
I0214 13:32:09.943168       8 log.go:172] (0xc000fd1b80) (0xc0003a9a40) Create stream
I0214 13:32:09.943194       8 log.go:172] (0xc000fd1b80) (0xc0003a9a40) Stream added, broadcasting: 5
I0214 13:32:09.947445       8 log.go:172] (0xc000fd1b80) Reply frame received for 5
I0214 13:32:10.265377       8 log.go:172] (0xc000fd1b80) Data frame received for 3
I0214 13:32:10.265512       8 log.go:172] (0xc0003a99a0) (3) Data frame handling
I0214 13:32:10.265555       8 log.go:172] (0xc0003a99a0) (3) Data frame sent
I0214 13:32:10.481341       8 log.go:172] (0xc000fd1b80) Data frame received for 1
I0214 13:32:10.481824       8 log.go:172] (0xc000fd1b80) (0xc0003a99a0) Stream removed, broadcasting: 3
I0214 13:32:10.482193       8 log.go:172] (0xc001c5cf00) (1) Data frame handling
I0214 13:32:10.482272       8 log.go:172] (0xc001c5cf00) (1) Data frame sent
I0214 13:32:10.482320       8 log.go:172] (0xc000fd1b80) (0xc0003a9a40) Stream removed, broadcasting: 5
I0214 13:32:10.482461       8 log.go:172] (0xc000fd1b80) (0xc001c5cf00) Stream removed, broadcasting: 1
I0214 13:32:10.482484       8 log.go:172] (0xc000fd1b80) Go away received
I0214 13:32:10.483183       8 log.go:172] (0xc000fd1b80) (0xc001c5cf00) Stream removed, broadcasting: 1
I0214 13:32:10.483283       8 log.go:172] (0xc000fd1b80) (0xc0003a99a0) Stream removed, broadcasting: 3
I0214 13:32:10.483293       8 log.go:172] (0xc000fd1b80) (0xc0003a9a40) Stream removed, broadcasting: 5
Feb 14 13:32:10.483: INFO: Waiting for endpoints: map[]
Feb 14 13:32:10.496: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5331 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:32:10.496: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:32:10.578598       8 log.go:172] (0xc0028ea840) (0xc0025b2640) Create stream
I0214 13:32:10.578828       8 log.go:172] (0xc0028ea840) (0xc0025b2640) Stream added, broadcasting: 1
I0214 13:32:10.596212       8 log.go:172] (0xc0028ea840) Reply frame received for 1
I0214 13:32:10.596267       8 log.go:172] (0xc0028ea840) (0xc001c5d040) Create stream
I0214 13:32:10.596279       8 log.go:172] (0xc0028ea840) (0xc001c5d040) Stream added, broadcasting: 3
I0214 13:32:10.598772       8 log.go:172] (0xc0028ea840) Reply frame received for 3
I0214 13:32:10.598799       8 log.go:172] (0xc0028ea840) (0xc001700c80) Create stream
I0214 13:32:10.598808       8 log.go:172] (0xc0028ea840) (0xc001700c80) Stream added, broadcasting: 5
I0214 13:32:10.603815       8 log.go:172] (0xc0028ea840) Reply frame received for 5
I0214 13:32:10.746649       8 log.go:172] (0xc0028ea840) Data frame received for 3
I0214 13:32:10.747081       8 log.go:172] (0xc001c5d040) (3) Data frame handling
I0214 13:32:10.747164       8 log.go:172] (0xc001c5d040) (3) Data frame sent
I0214 13:32:10.895816       8 log.go:172] (0xc0028ea840) (0xc001c5d040) Stream removed, broadcasting: 3
I0214 13:32:10.896160       8 log.go:172] (0xc0028ea840) Data frame received for 1
I0214 13:32:10.896191       8 log.go:172] (0xc0025b2640) (1) Data frame handling
I0214 13:32:10.896255       8 log.go:172] (0xc0025b2640) (1) Data frame sent
I0214 13:32:10.896267       8 log.go:172] (0xc0028ea840) (0xc0025b2640) Stream removed, broadcasting: 1
I0214 13:32:10.896322       8 log.go:172] (0xc0028ea840) (0xc001700c80) Stream removed, broadcasting: 5
I0214 13:32:10.896703       8 log.go:172] (0xc0028ea840) Go away received
I0214 13:32:10.896971       8 log.go:172] (0xc0028ea840) (0xc0025b2640) Stream removed, broadcasting: 1
I0214 13:32:10.897011       8 log.go:172] (0xc0028ea840) (0xc001c5d040) Stream removed, broadcasting: 3
I0214 13:32:10.897025       8 log.go:172] (0xc0028ea840) (0xc001700c80) Stream removed, broadcasting: 5
Feb 14 13:32:10.897: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:32:10.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5331" for this suite.
Feb 14 13:32:36.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:32:37.087: INFO: namespace pod-network-test-5331 deletion completed in 26.177810321s

• [SLOW TEST:61.607 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
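Editor's note: each `curl .../dial?...` in the networking test above asks the test container to proxy one HTTP request to a peer pod and report what it saw. The query parameters that request carries can be parsed like so (the URL is copied from the ExecWithOptions line in the log; it is only parsed here, nothing is contacted):

```python
from urllib.parse import urlsplit, parse_qs

# URL taken verbatim from the log's ExecWithOptions command.
url = ("http://10.44.0.2:8080/dial?request=hostName&protocol=http"
       "&host=10.32.0.4&port=8080&tries=1")
params = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
print(params["host"], params["port"], params["tries"])  # 10.32.0.4 8080 1
```

So the dialing pod (10.44.0.2) is told to make one HTTP attempt against the target pod 10.32.0.4:8080 and ask it for its hostname; "Waiting for endpoints: map[]" in the log means every expected peer answered.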
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:32:37.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:32:37.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f" in namespace "projected-4162" to be "success or failure"
Feb 14 13:32:37.301: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.675114ms
Feb 14 13:32:39.311: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021625763s
Feb 14 13:32:42.507: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.218184774s
Feb 14 13:32:44.564: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.27471477s
Feb 14 13:32:46.584: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.29456843s
STEP: Saw pod success
Feb 14 13:32:46.584: INFO: Pod "downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f" satisfied condition "success or failure"
Feb 14 13:32:46.591: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f container client-container: 
STEP: delete the pod
Feb 14 13:32:46.813: INFO: Waiting for pod downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f to disappear
Feb 14 13:32:46.821: INFO: Pod downwardapi-volume-1834fac4-dcb3-41be-93ba-0a3967e5787f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:32:46.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4162" for this suite.
Feb 14 13:32:52.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:32:52.986: INFO: namespace projected-4162 deletion completed in 6.155255525s

• [SLOW TEST:15.898 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
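Editor's note: the downward-API test above checks that when a container declares no memory limit, the `limits.memory` value exposed through the downward API falls back to the node's allocatable memory. That defaulting rule can be sketched as follows (the byte quantities are hypothetical):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Return the memory limit the downward API reports: the
    container's own limit when set, otherwise the node's allocatable
    memory is used as the default."""
    return container_limit if container_limit is not None else node_allocatable

print(effective_memory_limit(None, 4 * 1024**3))           # 4294967296
print(effective_memory_limit(256 * 1024**2, 4 * 1024**3))  # 268435456
```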
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:32:52.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 14 13:32:53.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5109'
Feb 14 13:32:53.450: INFO: stderr: ""
Feb 14 13:32:53.450: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 13:32:53.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5109'
Feb 14 13:32:53.625: INFO: stderr: ""
Feb 14 13:32:53.625: INFO: stdout: "update-demo-nautilus-lftqx update-demo-nautilus-zws28 "
Feb 14 13:32:53.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lftqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:32:53.776: INFO: stderr: ""
Feb 14 13:32:53.776: INFO: stdout: ""
Feb 14 13:32:53.776: INFO: update-demo-nautilus-lftqx is created but not running
Feb 14 13:32:58.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5109'
Feb 14 13:32:58.911: INFO: stderr: ""
Feb 14 13:32:58.911: INFO: stdout: "update-demo-nautilus-lftqx update-demo-nautilus-zws28 "
Feb 14 13:32:58.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lftqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:33:00.345: INFO: stderr: ""
Feb 14 13:33:00.346: INFO: stdout: ""
Feb 14 13:33:00.346: INFO: update-demo-nautilus-lftqx is created but not running
Feb 14 13:33:05.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5109'
Feb 14 13:33:05.497: INFO: stderr: ""
Feb 14 13:33:05.497: INFO: stdout: "update-demo-nautilus-lftqx update-demo-nautilus-zws28 "
Feb 14 13:33:05.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lftqx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:33:05.612: INFO: stderr: ""
Feb 14 13:33:05.612: INFO: stdout: "true"
Feb 14 13:33:05.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lftqx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:33:05.718: INFO: stderr: ""
Feb 14 13:33:05.718: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:33:05.718: INFO: validating pod update-demo-nautilus-lftqx
Feb 14 13:33:05.761: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:33:05.761: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:33:05.761: INFO: update-demo-nautilus-lftqx is verified up and running
Feb 14 13:33:05.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zws28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:33:05.866: INFO: stderr: ""
Feb 14 13:33:05.866: INFO: stdout: "true"
Feb 14 13:33:05.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zws28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5109'
Feb 14 13:33:06.012: INFO: stderr: ""
Feb 14 13:33:06.012: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:33:06.012: INFO: validating pod update-demo-nautilus-zws28
Feb 14 13:33:06.024: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:33:06.024: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:33:06.025: INFO: update-demo-nautilus-zws28 is verified up and running
STEP: using delete to clean up resources
Feb 14 13:33:06.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5109'
Feb 14 13:33:06.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 13:33:06.166: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 13:33:06.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5109'
Feb 14 13:33:06.252: INFO: stderr: "No resources found.\n"
Feb 14 13:33:06.252: INFO: stdout: ""
Feb 14 13:33:06.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 13:33:06.334: INFO: stderr: ""
Feb 14 13:33:06.334: INFO: stdout: "update-demo-nautilus-lftqx\nupdate-demo-nautilus-zws28\n"
Feb 14 13:33:06.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5109'
Feb 14 13:33:07.578: INFO: stderr: "No resources found.\n"
Feb 14 13:33:07.578: INFO: stdout: ""
Feb 14 13:33:07.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 13:33:07.758: INFO: stderr: ""
Feb 14 13:33:07.758: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:33:07.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5109" for this suite.
Feb 14 13:33:13.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:33:13.932: INFO: namespace kubectl-5109 deletion completed in 6.166976098s

• [SLOW TEST:20.945 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:33:13.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:33:14.121: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea" in namespace "downward-api-8434" to be "success or failure"
Feb 14 13:33:14.186: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Pending", Reason="", readiness=false. Elapsed: 64.116034ms
Feb 14 13:33:16.193: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071994315s
Feb 14 13:33:18.203: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081125963s
Feb 14 13:33:20.212: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090342676s
Feb 14 13:33:22.225: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103820926s
Feb 14 13:33:24.239: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117671921s
STEP: Saw pod success
Feb 14 13:33:24.240: INFO: Pod "downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea" satisfied condition "success or failure"
Feb 14 13:33:24.249: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea container client-container: 
STEP: delete the pod
Feb 14 13:33:24.388: INFO: Waiting for pod downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea to disappear
Feb 14 13:33:24.448: INFO: Pod downwardapi-volume-87ee583d-5862-42af-8255-2b0fc38c76ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:33:24.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8434" for this suite.
Feb 14 13:33:30.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:33:30.644: INFO: namespace downward-api-8434 deletion completed in 6.183303397s

• [SLOW TEST:16.711 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:33:30.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1d77eead-68aa-4aae-aeb5-144523a3765a
STEP: Creating a pod to test consume secrets
Feb 14 13:33:30.786: INFO: Waiting up to 5m0s for pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91" in namespace "secrets-2025" to be "success or failure"
Feb 14 13:33:30.810: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Pending", Reason="", readiness=false. Elapsed: 23.680135ms
Feb 14 13:33:32.834: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048124206s
Feb 14 13:33:34.842: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056177439s
Feb 14 13:33:36.855: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068426621s
Feb 14 13:33:38.864: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078144537s
Feb 14 13:33:40.883: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096872082s
STEP: Saw pod success
Feb 14 13:33:40.884: INFO: Pod "pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91" satisfied condition "success or failure"
Feb 14 13:33:40.901: INFO: Trying to get logs from node iruya-node pod pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91 container secret-env-test: 
STEP: delete the pod
Feb 14 13:33:41.463: INFO: Waiting for pod pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91 to disappear
Feb 14 13:33:41.468: INFO: Pod pod-secrets-becf480c-be1e-41a1-8ae3-f19b7153ac91 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:33:41.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2025" for this suite.
Feb 14 13:33:47.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:33:47.680: INFO: namespace secrets-2025 deletion completed in 6.202909038s

• [SLOW TEST:17.035 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:33:47.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 13:33:47.858: INFO: Waiting up to 5m0s for pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6" in namespace "emptydir-560" to be "success or failure"
Feb 14 13:33:47.880: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.366507ms
Feb 14 13:33:49.970: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111389158s
Feb 14 13:33:51.986: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128202917s
Feb 14 13:33:53.994: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136057581s
Feb 14 13:33:56.006: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147827216s
Feb 14 13:33:58.015: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.156294783s
STEP: Saw pod success
Feb 14 13:33:58.015: INFO: Pod "pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6" satisfied condition "success or failure"
Feb 14 13:33:58.019: INFO: Trying to get logs from node iruya-node pod pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6 container test-container: 
STEP: delete the pod
Feb 14 13:33:58.074: INFO: Waiting for pod pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6 to disappear
Feb 14 13:33:58.081: INFO: Pod pod-d090bf05-baa7-43bd-a84d-1a022eaaaef6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:33:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-560" for this suite.
Feb 14 13:34:04.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:34:04.287: INFO: namespace emptydir-560 deletion completed in 6.2004114s

• [SLOW TEST:16.607 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:34:04.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 14 13:34:04.415: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix050747222/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:34:04.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1540" for this suite.
Feb 14 13:34:10.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:34:10.680: INFO: namespace kubectl-1540 deletion completed in 6.144675072s

• [SLOW TEST:6.393 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:34:10.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0de75bc9-bd63-4822-8eb4-3f9aa7ed642c
STEP: Creating a pod to test consume configMaps
Feb 14 13:34:10.816: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d" in namespace "projected-8593" to be "success or failure"
Feb 14 13:34:10.828: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.138418ms
Feb 14 13:34:13.576: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.759272097s
Feb 14 13:34:15.587: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.770595827s
Feb 14 13:34:17.599: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782833334s
Feb 14 13:34:19.610: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.793158546s
STEP: Saw pod success
Feb 14 13:34:19.610: INFO: Pod "pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d" satisfied condition "success or failure"
Feb 14 13:34:19.614: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 13:34:19.866: INFO: Waiting for pod pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d to disappear
Feb 14 13:34:19.876: INFO: Pod pod-projected-configmaps-13ad1bc4-c242-4a4d-b9d0-aad2ba00230d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:34:19.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8593" for this suite.
Feb 14 13:34:25.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:34:26.086: INFO: namespace projected-8593 deletion completed in 6.200045027s

• [SLOW TEST:15.403 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:34:26.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:34:26.181: INFO: Creating deployment "nginx-deployment"
Feb 14 13:34:26.187: INFO: Waiting for observed generation 1
Feb 14 13:34:28.847: INFO: Waiting for all required pods to come up
Feb 14 13:34:28.858: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 14 13:34:58.110: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 14 13:34:58.120: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 14 13:34:58.139: INFO: Updating deployment nginx-deployment
Feb 14 13:34:58.139: INFO: Waiting for observed generation 2
Feb 14 13:35:01.205: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 14 13:35:01.777: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 14 13:35:01.791: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 14 13:35:02.132: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 14 13:35:02.132: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 14 13:35:02.148: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 14 13:35:02.161: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 14 13:35:02.161: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 14 13:35:02.175: INFO: Updating deployment nginx-deployment
Feb 14 13:35:02.175: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 14 13:35:03.518: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 14 13:35:03.971: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 14 13:35:09.827: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6916,SelfLink:/apis/apps/v1/namespaces/deployment-6916/deployments/nginx-deployment,UID:22931dce-4d36-43fb-ab0f-81f575aa8bf5,ResourceVersion:24324651,Generation:3,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-14 13:35:02 +0000 UTC 2020-02-14 13:35:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-14 13:35:04 +0000 UTC 2020-02-14 13:34:26 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 14 13:35:10.728: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6916,SelfLink:/apis/apps/v1/namespaces/deployment-6916/replicasets/nginx-deployment-55fb7cb77f,UID:35019d6e-06d9-4790-af00-8c26e3a1f45b,ResourceVersion:24324643,Generation:3,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 22931dce-4d36-43fb-ab0f-81f575aa8bf5 0xc002571b47 0xc002571b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 13:35:10.728: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 14 13:35:10.728: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6916,SelfLink:/apis/apps/v1/namespaces/deployment-6916/replicasets/nginx-deployment-7b8c6f4498,UID:3cde4385-7712-42b2-a39d-3d5cedb06680,ResourceVersion:24324647,Generation:3,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 22931dce-4d36-43fb-ab0f-81f575aa8bf5 0xc002571c17 0xc002571c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 14 13:35:13.234: INFO: Pod "nginx-deployment-55fb7cb77f-2vw8f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2vw8f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-2vw8f,UID:fe9d778a-733c-41e0-9bda-6957d5932cdb,ResourceVersion:24324573,Generation:0,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2577 0xc002bb2578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb25e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-14 13:34:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.234: INFO: Pod "nginx-deployment-55fb7cb77f-42whb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-42whb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-42whb,UID:cd42edbb-41d2-4027-b0f5-d0e4d0089ed8,ResourceVersion:24324641,Generation:0,CreationTimestamp:2020-02-14 13:35:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb26d7 0xc002bb26d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb2740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-14 13:35:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.234: INFO: Pod "nginx-deployment-55fb7cb77f-5r2xs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5r2xs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-5r2xs,UID:ddc021c5-0c90-4d14-ba72-931c6262f191,ResourceVersion:24324628,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2837 0xc002bb2838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb28b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb28d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.235: INFO: Pod "nginx-deployment-55fb7cb77f-854nr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-854nr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-854nr,UID:463b5c6b-55b4-46ad-8e56-17bc566b81ad,ResourceVersion:24324577,Generation:0,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2957 0xc002bb2958}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb29d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb29f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:34:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.235: INFO: Pod "nginx-deployment-55fb7cb77f-cp774" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cp774,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-cp774,UID:77d55dfb-e288-43a8-9808-5abf5e09bfc4,ResourceVersion:24324630,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2ac7 0xc002bb2ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb2b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.235: INFO: Pod "nginx-deployment-55fb7cb77f-h9vlm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h9vlm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-h9vlm,UID:0f107ade-fb75-4e39-90fa-a124c0eab02c,ResourceVersion:24324607,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2be7 0xc002bb2be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb2c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.235: INFO: Pod "nginx-deployment-55fb7cb77f-h9w47" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h9w47,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-h9w47,UID:c56b3cde-f3a0-424d-83fd-a984ea96090f,ResourceVersion:24324606,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2d07 0xc002bb2d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb2d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.235: INFO: Pod "nginx-deployment-55fb7cb77f-hll6m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hll6m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-hll6m,UID:ddbfd28e-cc3b-4faa-b0ca-5b80e4e2b8e9,ResourceVersion:24324635,Generation:0,CreationTimestamp:2020-02-14 13:35:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2e17 0xc002bb2e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb2e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.236: INFO: Pod "nginx-deployment-55fb7cb77f-j9cwf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j9cwf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-j9cwf,UID:db2663bf-e051-4fad-ba77-9e287470a14d,ResourceVersion:24324570,Generation:0,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb2f27 0xc002bb2f28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb2fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb2fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:34:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.236: INFO: Pod "nginx-deployment-55fb7cb77f-lxtm2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lxtm2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-lxtm2,UID:c7069f1f-dd10-4236-b054-8055dae118d8,ResourceVersion:24324639,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb3097 0xc002bb3098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.236: INFO: Pod "nginx-deployment-55fb7cb77f-tgqpq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tgqpq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-tgqpq,UID:e1aa42b1-547a-4736-82e5-a297d8175126,ResourceVersion:24324555,Generation:0,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb31a7 0xc002bb31a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-14 13:34:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.236: INFO: Pod "nginx-deployment-55fb7cb77f-vl6g4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vl6g4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-vl6g4,UID:1ca31a34-6d97-42a5-a894-e2d858afaa28,ResourceVersion:24324554,Generation:0,CreationTimestamp:2020-02-14 13:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb3307 0xc002bb3308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb3380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb33a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:34:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.236: INFO: Pod "nginx-deployment-55fb7cb77f-zsjqf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zsjqf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-55fb7cb77f-zsjqf,UID:c3cb7a7a-1111-4ae9-b3e2-8488cf36ed92,ResourceVersion:24324637,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 35019d6e-06d9-4790-af00-8c26e3a1f45b 0xc002bb3477 0xc002bb3478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002bb34f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.237: INFO: Pod "nginx-deployment-7b8c6f4498-28rrp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-28rrp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-28rrp,UID:1edd4b3a-6e17-422c-a0bc-6a7b64062898,ResourceVersion:24324481,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3597 0xc002bb3598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee49a45a928af344730f4e1028d3dc110ec46a5932bba9c557f9705a1e01a32e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.237: INFO: Pod "nginx-deployment-7b8c6f4498-2st4m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2st4m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-2st4m,UID:d7119a2e-05bf-4789-a509-3a63f6ecd792,ResourceVersion:24324633,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb36f7 0xc002bb36f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.237: INFO: Pod "nginx-deployment-7b8c6f4498-4n6sc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4n6sc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-4n6sc,UID:74bf30a7-c954-49a3-963e-7c8c2e4e25ba,ResourceVersion:24324512,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3807 0xc002bb3808}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb38a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:53 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bd32cb0b96dbf0edb6ed7a3551ad6e2f2e8886202ed839f8305a78510349c88f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.237: INFO: Pod "nginx-deployment-7b8c6f4498-4q9r4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4q9r4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-4q9r4,UID:d8d0aedb-ecab-4502-a32f-6a60434f4452,ResourceVersion:24324638,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3977 0xc002bb3978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb39e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.238: INFO: Pod "nginx-deployment-7b8c6f4498-5kzmd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5kzmd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-5kzmd,UID:f7bbd682-0162-4890-a31f-ca712c00cc8c,ResourceVersion:24324514,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3a87 0xc002bb3a88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://734b80c55f680f8b6a5399eaed29d52f5392a8cd0a90d5e57689b743b6763b72}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.238: INFO: Pod "nginx-deployment-7b8c6f4498-6wjlc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6wjlc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-6wjlc,UID:a534868d-640b-46e3-b156-487112c9dd50,ResourceVersion:24324650,Generation:0,CreationTimestamp:2020-02-14 13:35:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3bf7 0xc002bb3bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-14 13:35:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.239: INFO: Pod "nginx-deployment-7b8c6f4498-872vl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-872vl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-872vl,UID:e9eb6106-af66-4864-9d3c-00e63e950c74,ResourceVersion:24324484,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3d47 0xc002bb3d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6613820294a54c2bf5c9e4610572bb57a697b1fe4daeaeb320545bfa77ef7a28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.240: INFO: Pod "nginx-deployment-7b8c6f4498-9mflz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9mflz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-9mflz,UID:cb2724ef-3284-4db9-b6a8-f25e5194557d,ResourceVersion:24324616,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3ea7 0xc002bb3ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002bb3f10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002bb3f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.240: INFO: Pod "nginx-deployment-7b8c6f4498-bpccl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bpccl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-bpccl,UID:3b661683-0ea0-480a-a8d7-8b614564a73c,ResourceVersion:24324503,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc002bb3fb7 0xc002bb3fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9e8efa7a12f6870cb4bc9201ad245cc9521c8d09173a2e5552a3e49612ff3b55}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.240: INFO: Pod "nginx-deployment-7b8c6f4498-czmqb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-czmqb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-czmqb,UID:d4281677-c2e4-458b-9094-b87f02df6797,ResourceVersion:24324621,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108127 0xc003108128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031081a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031081c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.240: INFO: Pod "nginx-deployment-7b8c6f4498-fxvdl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fxvdl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-fxvdl,UID:053df70c-2f2f-4e0f-854c-171c7b039c25,ResourceVersion:24324506,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108247 0xc003108248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031082c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031082e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3af76e13dd7b7bb8ebf96fcd48e7ed15439e4ed06186f40eb68d3202fbfa756a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.240: INFO: Pod "nginx-deployment-7b8c6f4498-gmk2k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gmk2k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-gmk2k,UID:60ba68bd-c54f-47be-9942-5f4120056364,ResourceVersion:24324631,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc0031083b7 0xc0031083b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-h4lvd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h4lvd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-h4lvd,UID:00b11317-fd59-47ac-a07f-04549d7f9ee6,ResourceVersion:24324475,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc0031084c7 0xc0031084c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://260a0de94e193595c96a392b3f4ae3091dc1e420bf5f1f926bc11a7e8fb22140}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-j5wbq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j5wbq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-j5wbq,UID:740962ef-6374-4fe1-b24c-1cf969257f84,ResourceVersion:24324657,Generation:0,CreationTimestamp:2020-02-14 13:35:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108627 0xc003108628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031086a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031086c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:05 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:35:05 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-j7tzs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j7tzs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-j7tzs,UID:31065983-c5f3-4e88-b032-08a35d26c078,ResourceVersion:24324662,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108787 0xc003108788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031087f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-14 13:35:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-jtltp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jtltp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-jtltp,UID:d0516a92-316b-4849-8433-f6b1fdd72010,ResourceVersion:24324629,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc0031088d7 0xc0031088d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-kddx6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kddx6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-kddx6,UID:0ae93ca1-7701-4343-a85d-5d54708e6bdc,ResourceVersion:24324617,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc0031089f7 0xc0031089f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-kgnbw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kgnbw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-kgnbw,UID:fc096c06-5233-4ccb-8dc9-a3d31db0301d,ResourceVersion:24324478,Generation:0,CreationTimestamp:2020-02-14 13:34:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108b17 0xc003108b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:34:26 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-14 13:34:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 13:34:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://edf0cd47a1855b7395a25206b14dfcd60e6af54153548433c1c207f2bacd1f9d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.241: INFO: Pod "nginx-deployment-7b8c6f4498-pn5cv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pn5cv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-pn5cv,UID:d3171f27-1ac5-4e1c-b69e-c958c7b5be78,ResourceVersion:24324640,Generation:0,CreationTimestamp:2020-02-14 13:35:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108c77 0xc003108c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-14 13:35:04 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 13:35:13.242: INFO: Pod "nginx-deployment-7b8c6f4498-w76kl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w76kl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6916,SelfLink:/api/v1/namespaces/deployment-6916/pods/nginx-deployment-7b8c6f4498-w76kl,UID:57d76428-ca59-4c43-a458-3981095b2f7b,ResourceVersion:24324615,Generation:0,CreationTimestamp:2020-02-14 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 3cde4385-7712-42b2-a39d-3d5cedb06680 0xc003108dd7 0xc003108dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9h7s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9h7s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-c9h7s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003108e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003108e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:35:03 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:35:13.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6916" for this suite.
Feb 14 13:36:57.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:36:57.678: INFO: namespace deployment-6916 deletion completed in 1m43.138771257s

• [SLOW TEST:151.592 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
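The [SLOW TEST] above exercises the deployment controller's proportional scaling: when a Deployment is resized while a rollout is in progress, the desired replica count is split across its ReplicaSets in proportion to their current sizes. A minimal Python sketch of that allocation, assuming rounding leftovers simply go to the largest ReplicaSet first (the real controller also breaks ties by creation timestamp and respects maxSurge/maxUnavailable):

```python
def scale_proportionally(current_sizes, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes; a simplified model of the deployment controller's
    proportional scaling, not the actual Kubernetes implementation."""
    old_total = sum(current_sizes)
    if old_total == 0:
        # No existing replicas: give everything to the first ReplicaSet.
        return [new_total] + [0] * (len(current_sizes) - 1)
    # Each ReplicaSet gets its proportional share, rounded down.
    shares = [size * new_total // old_total for size in current_sizes]
    leftover = new_total - sum(shares)
    # Hand out rounding leftovers one at a time, largest ReplicaSet first.
    order = sorted(range(len(current_sizes)), key=lambda i: -current_sizes[i])
    for i in order[:leftover]:
        shares[i] += 1
    return shares
```

This is why, in the log above, both the old and new ReplicaSets keep a mix of available and unavailable pods while the scale-up proceeds.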
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:36:57.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-dp62
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 13:36:57.830: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dp62" in namespace "subpath-1549" to be "success or failure"
Feb 14 13:36:57.840: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Pending", Reason="", readiness=false. Elapsed: 10.267053ms
Feb 14 13:36:59.896: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065938941s
Feb 14 13:37:01.945: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115668535s
Feb 14 13:37:03.964: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133962762s
Feb 14 13:37:05.974: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14440162s
Feb 14 13:37:08.004: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 10.174622179s
Feb 14 13:37:10.012: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 12.182468503s
Feb 14 13:37:12.022: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 14.192656295s
Feb 14 13:37:14.031: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 16.200919143s
Feb 14 13:37:16.038: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 18.20768418s
Feb 14 13:37:18.045: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 20.214807133s
Feb 14 13:37:20.058: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 22.228575341s
Feb 14 13:37:22.066: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 24.236655135s
Feb 14 13:37:24.095: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 26.265381317s
Feb 14 13:37:27.114: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Running", Reason="", readiness=true. Elapsed: 29.283968421s
Feb 14 13:37:29.126: INFO: Pod "pod-subpath-test-projected-dp62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.295838829s
STEP: Saw pod success
Feb 14 13:37:29.126: INFO: Pod "pod-subpath-test-projected-dp62" satisfied condition "success or failure"
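The repeated `Phase="Pending"`/`Phase="Running"` lines above come from the e2e framework polling the pod until it reaches a terminal phase or the 5m0s deadline expires. A simplified sketch of such a wait loop (`wait_for_phase` is a hypothetical helper; the real framework inspects the full Pod object and its conditions, not just the phase string):

```python
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it reports a terminal pod phase or the
    timeout expires; a sketch of the 'success or failure' wait above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")
```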
Feb 14 13:37:29.130: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-dp62 container test-container-subpath-projected-dp62: 
STEP: delete the pod
Feb 14 13:37:29.180: INFO: Waiting for pod pod-subpath-test-projected-dp62 to disappear
Feb 14 13:37:29.274: INFO: Pod pod-subpath-test-projected-dp62 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dp62
Feb 14 13:37:29.274: INFO: Deleting pod "pod-subpath-test-projected-dp62" in namespace "subpath-1549"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:37:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1549" for this suite.
Feb 14 13:37:35.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:37:35.560: INFO: namespace subpath-1549 deletion completed in 6.269890053s

• [SLOW TEST:37.882 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
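The `Elapsed:` readings in the spec above come from the framework's pod-status poll loop: it re-fetches the pod roughly every two seconds and stops once the phase reaches `Succeeded` or `Failed`, or the 5m0s budget runs out. A minimal Python sketch of that loop, with `get_phase` as a hypothetical stand-in for the API lookup (names and defaults are illustrative, not the framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Mirrors the log's "Waiting up to 5m0s for pod ... to be
    'success or failure'" loop; get_phase stands in for an API GET.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

In the log, the pod sits in `Pending` for several polls before flipping to `Succeeded`; the sketch reproduces that shape when fed a sequence of phases.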
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:37:35.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 13:37:35.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1280'
Feb 14 13:37:35.951: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 13:37:35.951: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 14 13:37:35.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1280'
Feb 14 13:37:36.304: INFO: stderr: ""
Feb 14 13:37:36.304: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:37:36.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1280" for this suite.
Feb 14 13:38:00.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:38:00.473: INFO: namespace kubectl-1280 deletion completed in 24.163193629s

• [SLOW TEST:24.912 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:38:00.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:38:00.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:38:11.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-587" for this suite.
Feb 14 13:39:03.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:39:03.388: INFO: namespace pods-587 deletion completed in 52.135294169s

• [SLOW TEST:62.914 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:39:03.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f7b8856f-1615-41a2-9f18-b05ddeb19196
STEP: Creating a pod to test consume configMaps
Feb 14 13:39:03.466: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05" in namespace "projected-6379" to be "success or failure"
Feb 14 13:39:03.576: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Pending", Reason="", readiness=false. Elapsed: 109.808907ms
Feb 14 13:39:05.589: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123236762s
Feb 14 13:39:07.597: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1317124s
Feb 14 13:39:09.606: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139918538s
Feb 14 13:39:11.616: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149827906s
Feb 14 13:39:13.636: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170521437s
STEP: Saw pod success
Feb 14 13:39:13.637: INFO: Pod "pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05" satisfied condition "success or failure"
Feb 14 13:39:13.646: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 13:39:13.755: INFO: Waiting for pod pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05 to disappear
Feb 14 13:39:13.763: INFO: Pod pod-projected-configmaps-8d101c8d-3b46-42bb-9234-05c33f24ac05 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:39:13.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6379" for this suite.
Feb 14 13:39:19.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:39:19.950: INFO: namespace projected-6379 deletion completed in 6.177895078s

• [SLOW TEST:16.562 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:39:19.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 14 13:39:20.016: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 13:39:20.028: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 13:39:20.031: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb 14 13:39:20.043: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.043: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 13:39:20.043: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 14 13:39:20.043: INFO: 	Container weave ready: true, restart count 0
Feb 14 13:39:20.043: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 13:39:20.043: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.043: INFO: 	Container kube-bench ready: false, restart count 0
Feb 14 13:39:20.043: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb 14 13:39:20.057: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container coredns ready: true, restart count 0
Feb 14 13:39:20.057: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container etcd ready: true, restart count 0
Feb 14 13:39:20.057: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 14 13:39:20.057: INFO: 	Container weave ready: true, restart count 0
Feb 14 13:39:20.057: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 13:39:20.057: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 14 13:39:20.057: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 13:39:20.057: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 14 13:39:20.057: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 14 13:39:20.057: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb 14 13:39:20.057: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb 14 13:39:20.201: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb 14 13:39:20.201: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90.15f34874b6c34927], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7009/filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90.15f34875d1856137], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90.15f3487696117388], Reason = [Created], Message = [Created container filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90.15f34876c13f8a24], Reason = [Started], Message = [Started container filler-pod-4838b97e-498b-4008-adcb-b81332c4ac90]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb.15f34874b78e2e72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7009/filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb.15f34875e13bd990], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb.15f34876b080c866], Reason = [Created], Message = [Created container filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb.15f34876cbe9ebbe], Reason = [Started], Message = [Started container filler-pod-a428b6cb-290a-48ef-b31a-e2fd79794acb]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f348778936d177], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:39:33.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7009" for this suite.
Feb 14 13:39:40.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:39:41.493: INFO: namespace sched-pred-7009 deletion completed in 7.635473052s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.542 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
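The `FailedScheduling` warning in the spec above ("0/2 nodes are available: 2 Insufficient cpu.") is the scheduler's CPU-fit predicate at work: the test fills each node with pods whose summed CPU requests nearly exhaust allocatable CPU, then submits one more pod that cannot fit anywhere. A rough local model of that predicate (function and variable names are illustrative, not the scheduler's actual code):

```python
def fits_cpu(allocatable_m, existing_requests_m, pod_request_m):
    """True if a pod requesting pod_request_m millicores fits on the node,
    i.e. existing requests plus the new request stay within allocatable CPU."""
    return sum(existing_requests_m) + pod_request_m <= allocatable_m

def schedule(nodes, pod_request_m):
    """Pick the first node with room, or return a message in the
    scheduler's event style when no node fits.

    nodes: {name: (allocatable millicores, [existing request millicores])}
    """
    feasible = [name for name, (alloc, reqs) in nodes.items()
                if fits_cpu(alloc, reqs, pod_request_m)]
    if not feasible:
        return f"0/{len(nodes)} nodes are available: {len(nodes)} Insufficient cpu."
    return feasible[0]
```

With two nodes already packed by filler pods, a request that exceeds the remaining headroom on both reproduces the event message seen in the log.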
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:39:41.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:40:40.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4603" for this suite.
Feb 14 13:40:46.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:40:46.721: INFO: namespace container-runtime-4603 deletion completed in 6.21406837s

• [SLOW TEST:65.227 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:40:46.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 13:40:46.796: INFO: Waiting up to 5m0s for pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf" in namespace "emptydir-112" to be "success or failure"
Feb 14 13:40:46.803: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.09782ms
Feb 14 13:40:48.810: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014436036s
Feb 14 13:40:51.261: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465088973s
Feb 14 13:40:53.271: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475467172s
Feb 14 13:40:55.281: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485522696s
Feb 14 13:40:57.289: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.4937022s
STEP: Saw pod success
Feb 14 13:40:57.290: INFO: Pod "pod-6fc5e151-efa6-4608-9d67-e707c1094bcf" satisfied condition "success or failure"
Feb 14 13:40:57.293: INFO: Trying to get logs from node iruya-node pod pod-6fc5e151-efa6-4608-9d67-e707c1094bcf container test-container: 
STEP: delete the pod
Feb 14 13:40:57.785: INFO: Waiting for pod pod-6fc5e151-efa6-4608-9d67-e707c1094bcf to disappear
Feb 14 13:40:57.803: INFO: Pod pod-6fc5e151-efa6-4608-9d67-e707c1094bcf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:40:57.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-112" for this suite.
Feb 14 13:41:03.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:41:04.059: INFO: namespace emptydir-112 deletion completed in 6.231753239s

• [SLOW TEST:17.337 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
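The EmptyDir spec above mounts a volume with mode 0666 and has the test container verify the permission bits on the mounted file. The same check can be illustrated locally with Python's `os`/`stat` modules against a temp file (this models only the mode check, not a real emptyDir volume):

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return the permission bits of `path` as an octal string like '666'."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))[2:]

# Create a file and force mode 0666, the way the test's volume setup does,
# then read the bits back the way the test container would.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o666)
print(file_mode(path))
os.unlink(path)
```

`stat.S_IMODE` masks off the file-type bits, so only the rwx permission bits are compared, which is exactly what a `(root,0666,default)` check cares about.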
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:41:04.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 13:41:04.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4075'
Feb 14 13:41:06.223: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 13:41:06.223: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 14 13:41:08.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4075'
Feb 14 13:41:08.509: INFO: stderr: ""
Feb 14 13:41:08.510: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:41:08.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4075" for this suite.
Feb 14 13:41:14.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:41:14.721: INFO: namespace kubectl-4075 deletion completed in 6.170420159s

• [SLOW TEST:10.661 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:41:14.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7492
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 13:41:14.827: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 13:41:51.012: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7492 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:41:51.012: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:41:51.101881       8 log.go:172] (0xc0009ba8f0) (0xc001298b40) Create stream
I0214 13:41:51.102044       8 log.go:172] (0xc0009ba8f0) (0xc001298b40) Stream added, broadcasting: 1
I0214 13:41:51.111277       8 log.go:172] (0xc0009ba8f0) Reply frame received for 1
I0214 13:41:51.111309       8 log.go:172] (0xc0009ba8f0) (0xc00229c1e0) Create stream
I0214 13:41:51.111315       8 log.go:172] (0xc0009ba8f0) (0xc00229c1e0) Stream added, broadcasting: 3
I0214 13:41:51.112685       8 log.go:172] (0xc0009ba8f0) Reply frame received for 3
I0214 13:41:51.112731       8 log.go:172] (0xc0009ba8f0) (0xc0001eabe0) Create stream
I0214 13:41:51.112743       8 log.go:172] (0xc0009ba8f0) (0xc0001eabe0) Stream added, broadcasting: 5
I0214 13:41:51.114218       8 log.go:172] (0xc0009ba8f0) Reply frame received for 5
I0214 13:41:52.289301       8 log.go:172] (0xc0009ba8f0) Data frame received for 3
I0214 13:41:52.289448       8 log.go:172] (0xc00229c1e0) (3) Data frame handling
I0214 13:41:52.289484       8 log.go:172] (0xc00229c1e0) (3) Data frame sent
I0214 13:41:52.448089       8 log.go:172] (0xc0009ba8f0) (0xc00229c1e0) Stream removed, broadcasting: 3
I0214 13:41:52.448893       8 log.go:172] (0xc0009ba8f0) Data frame received for 1
I0214 13:41:52.449103       8 log.go:172] (0xc0009ba8f0) (0xc0001eabe0) Stream removed, broadcasting: 5
I0214 13:41:52.449216       8 log.go:172] (0xc001298b40) (1) Data frame handling
I0214 13:41:52.449246       8 log.go:172] (0xc001298b40) (1) Data frame sent
I0214 13:41:52.449296       8 log.go:172] (0xc0009ba8f0) (0xc001298b40) Stream removed, broadcasting: 1
I0214 13:41:52.449326       8 log.go:172] (0xc0009ba8f0) Go away received
I0214 13:41:52.449973       8 log.go:172] (0xc0009ba8f0) (0xc001298b40) Stream removed, broadcasting: 1
I0214 13:41:52.450045       8 log.go:172] (0xc0009ba8f0) (0xc00229c1e0) Stream removed, broadcasting: 3
I0214 13:41:52.450081       8 log.go:172] (0xc0009ba8f0) (0xc0001eabe0) Stream removed, broadcasting: 5
Feb 14 13:41:52.450: INFO: Found all expected endpoints: [netserver-0]
Feb 14 13:41:52.459: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7492 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:41:52.459: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:41:52.540235       8 log.go:172] (0xc0028eae70) (0xc0001eb4a0) Create stream
I0214 13:41:52.540559       8 log.go:172] (0xc0028eae70) (0xc0001eb4a0) Stream added, broadcasting: 1
I0214 13:41:52.551521       8 log.go:172] (0xc0028eae70) Reply frame received for 1
I0214 13:41:52.551596       8 log.go:172] (0xc0028eae70) (0xc001626000) Create stream
I0214 13:41:52.551613       8 log.go:172] (0xc0028eae70) (0xc001626000) Stream added, broadcasting: 3
I0214 13:41:52.554468       8 log.go:172] (0xc0028eae70) Reply frame received for 3
I0214 13:41:52.554528       8 log.go:172] (0xc0028eae70) (0xc0001eb540) Create stream
I0214 13:41:52.554571       8 log.go:172] (0xc0028eae70) (0xc0001eb540) Stream added, broadcasting: 5
I0214 13:41:52.557498       8 log.go:172] (0xc0028eae70) Reply frame received for 5
I0214 13:41:53.788665       8 log.go:172] (0xc0028eae70) Data frame received for 3
I0214 13:41:53.788950       8 log.go:172] (0xc001626000) (3) Data frame handling
I0214 13:41:53.789031       8 log.go:172] (0xc001626000) (3) Data frame sent
I0214 13:41:54.098492       8 log.go:172] (0xc0028eae70) (0xc0001eb540) Stream removed, broadcasting: 5
I0214 13:41:54.099380       8 log.go:172] (0xc0028eae70) Data frame received for 1
I0214 13:41:54.099577       8 log.go:172] (0xc0028eae70) (0xc001626000) Stream removed, broadcasting: 3
I0214 13:41:54.099668       8 log.go:172] (0xc0001eb4a0) (1) Data frame handling
I0214 13:41:54.099738       8 log.go:172] (0xc0001eb4a0) (1) Data frame sent
I0214 13:41:54.099790       8 log.go:172] (0xc0028eae70) (0xc0001eb4a0) Stream removed, broadcasting: 1
I0214 13:41:54.099857       8 log.go:172] (0xc0028eae70) Go away received
I0214 13:41:54.100501       8 log.go:172] (0xc0028eae70) (0xc0001eb4a0) Stream removed, broadcasting: 1
I0214 13:41:54.100550       8 log.go:172] (0xc0028eae70) (0xc001626000) Stream removed, broadcasting: 3
I0214 13:41:54.100563       8 log.go:172] (0xc0028eae70) (0xc0001eb540) Stream removed, broadcasting: 5
Feb 14 13:41:54.100: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:41:54.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7492" for this suite.
Feb 14 13:42:18.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:42:18.237: INFO: namespace pod-network-test-7492 deletion completed in 24.12393382s

• [SLOW TEST:63.516 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
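The UDP reachability check in the test above pipes a payload through `nc` to the target pod and discards blank lines from the reply (`echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'`). A minimal local sketch of just the reply-filtering step, with the `nc` exchange and the expected endpoint name (`netserver-1`) stubbed in:

```shell
# Stand-in for the nc reply: the endpoint's hostname plus a trailing blank line.
# The test strips blank lines so only the non-empty hostname survives.
reply="$(printf 'netserver-1\n\n' | grep -v '^[[:space:]]*$')"
echo "$reply"   # the surviving line is compared against the expected endpoint list
```

The surviving non-empty line is what the framework matches against its expected endpoints (`Found all expected endpoints: [netserver-1]` above).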
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:42:18.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-4b5b4b95-1bf8-4f13-baee-628715591514 in namespace container-probe-5009
Feb 14 13:42:26.367: INFO: Started pod test-webserver-4b5b4b95-1bf8-4f13-baee-628715591514 in namespace container-probe-5009
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 13:42:26.386: INFO: Initial restart count of pod test-webserver-4b5b4b95-1bf8-4f13-baee-628715591514 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:46:27.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5009" for this suite.
Feb 14 13:46:33.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:46:33.754: INFO: namespace container-probe-5009 deletion completed in 6.190676738s

• [SLOW TEST:255.516 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:46:33.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 14 13:46:45.114: INFO: Successfully updated pod "labelsupdatefdeb0ac9-429d-41bc-8fe7-5c17ee1529bd"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:46:47.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8124" for this suite.
Feb 14 13:47:09.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:47:09.338: INFO: namespace downward-api-8124 deletion completed in 22.161828788s

• [SLOW TEST:35.584 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:47:09.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 14 13:47:09.445: INFO: Waiting up to 5m0s for pod "pod-597bcf3a-439b-4348-944c-236bb8835b41" in namespace "emptydir-4279" to be "success or failure"
Feb 14 13:47:09.448: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.817512ms
Feb 14 13:47:11.459: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013606523s
Feb 14 13:47:13.467: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021272061s
Feb 14 13:47:15.475: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02917483s
Feb 14 13:47:17.481: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035230189s
Feb 14 13:47:19.487: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.041838241s
STEP: Saw pod success
Feb 14 13:47:19.488: INFO: Pod "pod-597bcf3a-439b-4348-944c-236bb8835b41" satisfied condition "success or failure"
Feb 14 13:47:19.493: INFO: Trying to get logs from node iruya-node pod pod-597bcf3a-439b-4348-944c-236bb8835b41 container test-container: 
STEP: delete the pod
Feb 14 13:47:19.560: INFO: Waiting for pod pod-597bcf3a-439b-4348-944c-236bb8835b41 to disappear
Feb 14 13:47:19.568: INFO: Pod pod-597bcf3a-439b-4348-944c-236bb8835b41 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:47:19.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4279" for this suite.
Feb 14 13:47:25.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:47:25.799: INFO: namespace emptydir-4279 deletion completed in 6.220210869s

• [SLOW TEST:16.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:47:25.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8553.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8553.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 13:47:38.010: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.018: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.021: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.024: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.027: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.031: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.037: INFO: Unable to read jessie_udp@PodARecord from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.042: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982: the server could not find the requested resource (get pods dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982)
Feb 14 13:47:38.042: INFO: Lookups using dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 14 13:47:43.084: INFO: DNS probes using dns-8553/dns-test-6c7fd5fc-de4a-4ead-92fd-9229bf3dd982 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:47:43.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8553" for this suite.
Feb 14 13:47:49.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:47:49.420: INFO: namespace dns-8553 deletion completed in 6.202806028s

• [SLOW TEST:23.620 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
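The wheezy/jessie probe loops above do two things per iteration: derive a pod A-record name from the pod's IP, then run a `dig` lookup and write an `OK` marker only if it returns an answer. A local sketch of both pieces, with the pod IP assumed to be `10.44.0.2` and `dig` stubbed out (no cluster DNS is queried here):

```shell
# Derive the pod A-record name the probe queries (namespace dns-8553 from the log).
podARec="$(echo "10.44.0.2" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-8553.pod.cluster.local"}')"
echo "$podARec"

# Stand-in for: dig +noall +answer +search "$podARec" A
# A real lookup prints the answer section; an empty result means the name did not resolve.
lookup() { echo "10.96.0.1"; }
check="$(lookup "$podARec")" && test -n "$check" && echo OK
```

In the real test the `OK` marker is written to `/results/<image>_<proto>@<name>`, and the prober retries until every expected marker file appears, which is why the early "Unable to read" lines above resolve into "DNS probes ... succeeded" a few seconds later.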
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:47:49.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:47:49.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4372" for this suite.
Feb 14 13:48:11.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:48:11.775: INFO: namespace pods-4372 deletion completed in 22.214747196s

• [SLOW TEST:22.355 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:48:11.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 14 13:48:11.881: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:48:25.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9530" for this suite.
Feb 14 13:48:31.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:48:31.815: INFO: namespace init-container-9530 deletion completed in 6.206680666s

• [SLOW TEST:20.039 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:48:31.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7f00ad9d-8556-4a78-b1ac-03b6b474ec39
STEP: Creating a pod to test consume configMaps
Feb 14 13:48:31.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc" in namespace "configmap-3406" to be "success or failure"
Feb 14 13:48:31.986: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.787597ms
Feb 14 13:48:33.996: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03421486s
Feb 14 13:48:36.014: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052240749s
Feb 14 13:48:38.027: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06472859s
Feb 14 13:48:40.037: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075203824s
Feb 14 13:48:42.047: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084609105s
STEP: Saw pod success
Feb 14 13:48:42.047: INFO: Pod "pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc" satisfied condition "success or failure"
Feb 14 13:48:42.050: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc container configmap-volume-test: 
STEP: delete the pod
Feb 14 13:48:42.119: INFO: Waiting for pod pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc to disappear
Feb 14 13:48:42.176: INFO: Pod pod-configmaps-e0303fb5-e3ed-467c-a73c-433fdda95cfc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:48:42.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3406" for this suite.
Feb 14 13:48:48.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:48:48.325: INFO: namespace configmap-3406 deletion completed in 6.142383259s

• [SLOW TEST:16.509 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:48:48.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4014
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 14 13:48:48.586: INFO: Found 0 stateful pods, waiting for 3
Feb 14 13:48:58.599: INFO: Found 2 stateful pods, waiting for 3
Feb 14 13:49:08.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:08.602: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:08.602: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 13:49:18.602: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:18.602: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:18.603: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 14 13:49:18.640: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 14 13:49:28.716: INFO: Updating stateful set ss2
Feb 14 13:49:28.846: INFO: Waiting for Pod statefulset-4014/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 14 13:49:39.203: INFO: Found 2 stateful pods, waiting for 3
Feb 14 13:49:49.217: INFO: Found 2 stateful pods, waiting for 3
Feb 14 13:49:59.215: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:59.215: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:49:59.215: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 13:50:09.222: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:50:09.222: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 13:50:09.222: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 14 13:50:09.258: INFO: Updating stateful set ss2
Feb 14 13:50:09.358: INFO: Waiting for Pod statefulset-4014/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 13:50:19.459: INFO: Updating stateful set ss2
Feb 14 13:50:19.614: INFO: Waiting for StatefulSet statefulset-4014/ss2 to complete update
Feb 14 13:50:19.614: INFO: Waiting for Pod statefulset-4014/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 13:50:29.628: INFO: Waiting for StatefulSet statefulset-4014/ss2 to complete update
Feb 14 13:50:29.628: INFO: Waiting for Pod statefulset-4014/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 13:50:39.632: INFO: Waiting for StatefulSet statefulset-4014/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 14 13:50:49.635: INFO: Deleting all statefulset in ns statefulset-4014
Feb 14 13:50:49.639: INFO: Scaling statefulset ss2 to 0
Feb 14 13:51:29.681: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 13:51:29.687: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:51:29.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4014" for this suite.
Feb 14 13:51:37.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:51:37.948: INFO: namespace statefulset-4014 deletion completed in 8.22472817s

• [SLOW TEST:169.623 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:51:37.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 13:51:47.549: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:51:47.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1955" for this suite.
Feb 14 13:51:53.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:51:53.841: INFO: namespace container-runtime-1955 deletion completed in 6.157737033s

• [SLOW TEST:15.893 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:51:53.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 14 13:52:04.083: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-05689714-0896-4bd4-abeb-c9f2c86edf85,GenerateName:,Namespace:events-2281,SelfLink:/api/v1/namespaces/events-2281/pods/send-events-05689714-0896-4bd4-abeb-c9f2c86edf85,UID:414e3539-2ec1-4998-b603-6f51bae140f5,ResourceVersion:24327088,Generation:0,CreationTimestamp:2020-02-14 13:51:54 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 45255290,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kt7rh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kt7rh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-kt7rh true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00243bc50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00243bca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:51:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:52:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:52:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:51:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-14 13:51:54 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-14 13:52:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://d723902d6fabc60eb85cd5790fd6948634ef1dc42451dd2d5ea5d1aa6a951fd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 14 13:52:06.091: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 14 13:52:08.101: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:52:08.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2281" for this suite.
Feb 14 13:52:48.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:52:48.353: INFO: namespace events-2281 deletion completed in 40.225896123s

• [SLOW TEST:54.511 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:52:48.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9212
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9212 to expose endpoints map[]
Feb 14 13:52:48.567: INFO: successfully validated that service endpoint-test2 in namespace services-9212 exposes endpoints map[] (12.002977ms elapsed)
STEP: Creating pod pod1 in namespace services-9212
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9212 to expose endpoints map[pod1:[80]]
Feb 14 13:52:52.711: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.122241298s elapsed, will retry)
Feb 14 13:52:57.786: INFO: successfully validated that service endpoint-test2 in namespace services-9212 exposes endpoints map[pod1:[80]] (9.197703055s elapsed)
STEP: Creating pod pod2 in namespace services-9212
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9212 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 14 13:53:02.161: INFO: Unexpected endpoints: found map[5ef81a2e-5abe-4e0c-b0d2-6542c799d164:[80]], expected map[pod1:[80] pod2:[80]] (4.355977849s elapsed, will retry)
Feb 14 13:53:05.227: INFO: successfully validated that service endpoint-test2 in namespace services-9212 exposes endpoints map[pod1:[80] pod2:[80]] (7.42188675s elapsed)
STEP: Deleting pod pod1 in namespace services-9212
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9212 to expose endpoints map[pod2:[80]]
Feb 14 13:53:05.291: INFO: successfully validated that service endpoint-test2 in namespace services-9212 exposes endpoints map[pod2:[80]] (39.02062ms elapsed)
STEP: Deleting pod pod2 in namespace services-9212
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9212 to expose endpoints map[]
Feb 14 13:53:05.410: INFO: successfully validated that service endpoint-test2 in namespace services-9212 exposes endpoints map[] (7.539588ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:53:05.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9212" for this suite.
Feb 14 13:53:27.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:53:27.736: INFO: namespace services-9212 deletion completed in 22.275235429s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.382 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:53:27.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 14 13:53:27.848: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327274,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 13:53:27.849: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327274,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 14 13:53:37.873: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327288,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 14 13:53:37.874: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327288,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 14 13:53:47.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327303,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 13:53:47.909: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327303,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 14 13:53:57.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327316,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 13:53:57.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-a,UID:363b274b-5c97-4e72-8c85-40db8ef9b044,ResourceVersion:24327316,Generation:0,CreationTimestamp:2020-02-14 13:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 14 13:54:07.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-b,UID:e54984e0-39f1-4af6-8105-351ce5f1ef95,ResourceVersion:24327330,Generation:0,CreationTimestamp:2020-02-14 13:54:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 13:54:07.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-b,UID:e54984e0-39f1-4af6-8105-351ce5f1ef95,ResourceVersion:24327330,Generation:0,CreationTimestamp:2020-02-14 13:54:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 14 13:54:17.962: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-b,UID:e54984e0-39f1-4af6-8105-351ce5f1ef95,ResourceVersion:24327344,Generation:0,CreationTimestamp:2020-02-14 13:54:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 13:54:17.963: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3143,SelfLink:/api/v1/namespaces/watch-3143/configmaps/e2e-watch-test-configmap-b,UID:e54984e0-39f1-4af6-8105-351ce5f1ef95,ResourceVersion:24327344,Generation:0,CreationTimestamp:2020-02-14 13:54:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:54:27.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3143" for this suite.
Feb 14 13:54:34.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:54:34.130: INFO: namespace watch-3143 deletion completed in 6.152327221s

• [SLOW TEST:66.394 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:54:34.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-wws8
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 13:54:34.276: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wws8" in namespace "subpath-7934" to be "success or failure"
Feb 14 13:54:34.305: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.007146ms
Feb 14 13:54:36.326: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049615958s
Feb 14 13:54:38.343: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06593942s
Feb 14 13:54:40.352: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075263085s
Feb 14 13:54:42.364: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086953779s
Feb 14 13:54:44.372: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 10.095880705s
Feb 14 13:54:46.381: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 12.1045936s
Feb 14 13:54:48.391: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 14.114133586s
Feb 14 13:54:50.399: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 16.122137575s
Feb 14 13:54:52.408: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 18.131581801s
Feb 14 13:54:54.416: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 20.139087708s
Feb 14 13:54:56.428: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 22.151299998s
Feb 14 13:54:58.438: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 24.161564415s
Feb 14 13:55:00.455: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 26.177915242s
Feb 14 13:55:02.474: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Running", Reason="", readiness=true. Elapsed: 28.196945768s
Feb 14 13:55:04.565: INFO: Pod "pod-subpath-test-secret-wws8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.28838386s
STEP: Saw pod success
Feb 14 13:55:04.565: INFO: Pod "pod-subpath-test-secret-wws8" satisfied condition "success or failure"
Feb 14 13:55:04.574: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-wws8 container test-container-subpath-secret-wws8: 
STEP: delete the pod
Feb 14 13:55:04.796: INFO: Waiting for pod pod-subpath-test-secret-wws8 to disappear
Feb 14 13:55:04.848: INFO: Pod pod-subpath-test-secret-wws8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-wws8
Feb 14 13:55:04.848: INFO: Deleting pod "pod-subpath-test-secret-wws8" in namespace "subpath-7934"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:55:04.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7934" for this suite.
Feb 14 13:55:10.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:55:11.092: INFO: namespace subpath-7934 deletion completed in 6.222803171s

• [SLOW TEST:36.961 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:55:11.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4930
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 13:55:11.181: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 13:55:51.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4930 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:55:51.411: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:55:51.500422       8 log.go:172] (0xc0028ea370) (0xc001d5c5a0) Create stream
I0214 13:55:51.500699       8 log.go:172] (0xc0028ea370) (0xc001d5c5a0) Stream added, broadcasting: 1
I0214 13:55:51.515648       8 log.go:172] (0xc0028ea370) Reply frame received for 1
I0214 13:55:51.515758       8 log.go:172] (0xc0028ea370) (0xc0011868c0) Create stream
I0214 13:55:51.515769       8 log.go:172] (0xc0028ea370) (0xc0011868c0) Stream added, broadcasting: 3
I0214 13:55:51.517377       8 log.go:172] (0xc0028ea370) Reply frame received for 3
I0214 13:55:51.517404       8 log.go:172] (0xc0028ea370) (0xc000516000) Create stream
I0214 13:55:51.517413       8 log.go:172] (0xc0028ea370) (0xc000516000) Stream added, broadcasting: 5
I0214 13:55:51.520422       8 log.go:172] (0xc0028ea370) Reply frame received for 5
I0214 13:55:51.759031       8 log.go:172] (0xc0028ea370) Data frame received for 3
I0214 13:55:51.759199       8 log.go:172] (0xc0011868c0) (3) Data frame handling
I0214 13:55:51.759230       8 log.go:172] (0xc0011868c0) (3) Data frame sent
I0214 13:55:52.059917       8 log.go:172] (0xc0028ea370) (0xc0011868c0) Stream removed, broadcasting: 3
I0214 13:55:52.060462       8 log.go:172] (0xc0028ea370) Data frame received for 1
I0214 13:55:52.060531       8 log.go:172] (0xc0028ea370) (0xc000516000) Stream removed, broadcasting: 5
I0214 13:55:52.060588       8 log.go:172] (0xc001d5c5a0) (1) Data frame handling
I0214 13:55:52.060621       8 log.go:172] (0xc001d5c5a0) (1) Data frame sent
I0214 13:55:52.060642       8 log.go:172] (0xc0028ea370) (0xc001d5c5a0) Stream removed, broadcasting: 1
I0214 13:55:52.060671       8 log.go:172] (0xc0028ea370) Go away received
I0214 13:55:52.061141       8 log.go:172] (0xc0028ea370) (0xc001d5c5a0) Stream removed, broadcasting: 1
I0214 13:55:52.061187       8 log.go:172] (0xc0028ea370) (0xc0011868c0) Stream removed, broadcasting: 3
I0214 13:55:52.061208       8 log.go:172] (0xc0028ea370) (0xc000516000) Stream removed, broadcasting: 5
Feb 14 13:55:52.061: INFO: Waiting for endpoints: map[]
Feb 14 13:55:52.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4930 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 13:55:52.075: INFO: >>> kubeConfig: /root/.kube/config
I0214 13:55:52.175768       8 log.go:172] (0xc0009bb130) (0xc000516500) Create stream
I0214 13:55:52.175961       8 log.go:172] (0xc0009bb130) (0xc000516500) Stream added, broadcasting: 1
I0214 13:55:52.189956       8 log.go:172] (0xc0009bb130) Reply frame received for 1
I0214 13:55:52.190624       8 log.go:172] (0xc0009bb130) (0xc001d06460) Create stream
I0214 13:55:52.190677       8 log.go:172] (0xc0009bb130) (0xc001d06460) Stream added, broadcasting: 3
I0214 13:55:52.197613       8 log.go:172] (0xc0009bb130) Reply frame received for 3
I0214 13:55:52.197769       8 log.go:172] (0xc0009bb130) (0xc001186960) Create stream
I0214 13:55:52.197795       8 log.go:172] (0xc0009bb130) (0xc001186960) Stream added, broadcasting: 5
I0214 13:55:52.203841       8 log.go:172] (0xc0009bb130) Reply frame received for 5
I0214 13:55:52.448725       8 log.go:172] (0xc0009bb130) Data frame received for 3
I0214 13:55:52.448862       8 log.go:172] (0xc001d06460) (3) Data frame handling
I0214 13:55:52.448910       8 log.go:172] (0xc001d06460) (3) Data frame sent
I0214 13:55:52.720883       8 log.go:172] (0xc0009bb130) (0xc001d06460) Stream removed, broadcasting: 3
I0214 13:55:52.721128       8 log.go:172] (0xc0009bb130) Data frame received for 1
I0214 13:55:52.721153       8 log.go:172] (0xc000516500) (1) Data frame handling
I0214 13:55:52.721178       8 log.go:172] (0xc000516500) (1) Data frame sent
I0214 13:55:52.721217       8 log.go:172] (0xc0009bb130) (0xc000516500) Stream removed, broadcasting: 1
I0214 13:55:52.721374       8 log.go:172] (0xc0009bb130) (0xc001186960) Stream removed, broadcasting: 5
I0214 13:55:52.721404       8 log.go:172] (0xc0009bb130) Go away received
I0214 13:55:52.722107       8 log.go:172] (0xc0009bb130) (0xc000516500) Stream removed, broadcasting: 1
I0214 13:55:52.722310       8 log.go:172] (0xc0009bb130) (0xc001d06460) Stream removed, broadcasting: 3
I0214 13:55:52.722319       8 log.go:172] (0xc0009bb130) (0xc001186960) Stream removed, broadcasting: 5
Feb 14 13:55:52.722: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:55:52.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4930" for this suite.
Feb 14 13:56:18.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:56:18.901: INFO: namespace pod-network-test-4930 deletion completed in 26.167760695s

• [SLOW TEST:67.810 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:56:18.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:56:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7017" for this suite.
Feb 14 13:57:19.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:57:19.313: INFO: namespace kubelet-test-7017 deletion completed in 52.153887638s

• [SLOW TEST:60.411 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:57:19.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 13:57:19.461: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 14 13:57:19.474: INFO: Number of nodes with available pods: 0
Feb 14 13:57:19.475: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 14 13:57:19.517: INFO: Number of nodes with available pods: 0
Feb 14 13:57:19.517: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:21.550: INFO: Number of nodes with available pods: 0
Feb 14 13:57:21.550: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:22.530: INFO: Number of nodes with available pods: 0
Feb 14 13:57:22.530: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:23.525: INFO: Number of nodes with available pods: 0
Feb 14 13:57:23.525: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:24.533: INFO: Number of nodes with available pods: 0
Feb 14 13:57:24.533: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:25.527: INFO: Number of nodes with available pods: 0
Feb 14 13:57:25.527: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:26.531: INFO: Number of nodes with available pods: 0
Feb 14 13:57:26.531: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:27.527: INFO: Number of nodes with available pods: 0
Feb 14 13:57:27.527: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:28.533: INFO: Number of nodes with available pods: 0
Feb 14 13:57:28.534: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:29.539: INFO: Number of nodes with available pods: 1
Feb 14 13:57:29.540: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 14 13:57:29.902: INFO: Number of nodes with available pods: 1
Feb 14 13:57:29.902: INFO: Number of running nodes: 0, number of available pods: 1
Feb 14 13:57:30.914: INFO: Number of nodes with available pods: 0
Feb 14 13:57:30.914: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 14 13:57:30.986: INFO: Number of nodes with available pods: 0
Feb 14 13:57:30.986: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:32.003: INFO: Number of nodes with available pods: 0
Feb 14 13:57:32.003: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:32.996: INFO: Number of nodes with available pods: 0
Feb 14 13:57:32.996: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:34.065: INFO: Number of nodes with available pods: 0
Feb 14 13:57:34.065: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:34.994: INFO: Number of nodes with available pods: 0
Feb 14 13:57:34.995: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:36.080: INFO: Number of nodes with available pods: 0
Feb 14 13:57:36.080: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:36.997: INFO: Number of nodes with available pods: 0
Feb 14 13:57:36.997: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:37.995: INFO: Number of nodes with available pods: 0
Feb 14 13:57:37.995: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:38.994: INFO: Number of nodes with available pods: 0
Feb 14 13:57:38.994: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:40.001: INFO: Number of nodes with available pods: 0
Feb 14 13:57:40.001: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:40.996: INFO: Number of nodes with available pods: 0
Feb 14 13:57:40.996: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:41.995: INFO: Number of nodes with available pods: 0
Feb 14 13:57:41.995: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:42.992: INFO: Number of nodes with available pods: 0
Feb 14 13:57:42.992: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:43.998: INFO: Number of nodes with available pods: 0
Feb 14 13:57:43.998: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:44.994: INFO: Number of nodes with available pods: 0
Feb 14 13:57:44.994: INFO: Node iruya-node is running more than one daemon pod
Feb 14 13:57:45.993: INFO: Number of nodes with available pods: 1
Feb 14 13:57:45.993: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5413, will wait for the garbage collector to delete the pods
Feb 14 13:57:46.064: INFO: Deleting DaemonSet.extensions daemon-set took: 11.877572ms
Feb 14 13:57:46.365: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.743537ms
Feb 14 13:57:56.673: INFO: Number of nodes with available pods: 0
Feb 14 13:57:56.674: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 13:57:56.683: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5413/daemonsets","resourceVersion":"24327808"},"items":null}

Feb 14 13:57:56.686: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5413/pods","resourceVersion":"24327809"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:57:56.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5413" for this suite.
Feb 14 13:58:02.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:58:02.965: INFO: namespace daemonsets-5413 deletion completed in 6.186061574s

• [SLOW TEST:43.651 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
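The DaemonSet test above drives scheduling purely through node labels: no pods run until a node carries the selector's label, relabeling the node (blue → green) unschedules the pod, and updating the DaemonSet's selector to green brings it back under a RollingUpdate strategy. A minimal sketch of the object involved — the label key/values, pause image, and app label here are illustrative, not taken from the test source:

```yaml
# Hypothetical DaemonSet mirroring the "complex daemon" test: a node selector
# gates scheduling, and the test later switches the strategy to RollingUpdate.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate          # the test changes the strategy to this mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue              # pods schedule only onto nodes labeled color=blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder; the real test uses its own image
```

With such an object, something like `kubectl label node iruya-node color=blue --overwrite` would launch the daemon pod on that node, and relabeling to `color=green` would unschedule it — the same launch/unschedule cycle the log records.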
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:58:02.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zscv
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 13:58:03.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zscv" in namespace "subpath-9314" to be "success or failure"
Feb 14 13:58:03.221: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Pending", Reason="", readiness=false. Elapsed: 111.832218ms
Feb 14 13:58:05.235: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125698637s
Feb 14 13:58:07.248: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138460096s
Feb 14 13:58:09.257: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147887707s
Feb 14 13:58:11.266: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.156917308s
Feb 14 13:58:13.276: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 10.166507039s
Feb 14 13:58:15.285: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 12.175822208s
Feb 14 13:58:17.296: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 14.186389566s
Feb 14 13:58:19.307: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 16.19795108s
Feb 14 13:58:21.317: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 18.207705913s
Feb 14 13:58:24.275: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 21.16545052s
Feb 14 13:58:26.283: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 23.173988967s
Feb 14 13:58:28.297: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 25.187448244s
Feb 14 13:58:30.308: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 27.198487057s
Feb 14 13:58:32.316: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Running", Reason="", readiness=true. Elapsed: 29.206709043s
Feb 14 13:58:34.329: INFO: Pod "pod-subpath-test-configmap-zscv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.220129462s
STEP: Saw pod success
Feb 14 13:58:34.330: INFO: Pod "pod-subpath-test-configmap-zscv" satisfied condition "success or failure"
Feb 14 13:58:34.334: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zscv container test-container-subpath-configmap-zscv: 
STEP: delete the pod
Feb 14 13:58:34.430: INFO: Waiting for pod pod-subpath-test-configmap-zscv to disappear
Feb 14 13:58:34.548: INFO: Pod pod-subpath-test-configmap-zscv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zscv
Feb 14 13:58:34.548: INFO: Deleting pod "pod-subpath-test-configmap-zscv" in namespace "subpath-9314"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:58:34.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9314" for this suite.
Feb 14 13:58:40.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:58:40.743: INFO: namespace subpath-9314 deletion completed in 6.183555048s

• [SLOW TEST:37.778 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
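The atomic-writer subpath test above creates a pod that mounts a single ConfigMap key at a `subPath` and verifies the file stays consistent while the ConfigMap volume is atomically updated. A rough sketch of that pod shape — the ConfigMap name, key, paths, and busybox command are assumptions for illustration:

```yaml
# Hypothetical pod mounting one path of a ConfigMap volume via subPath,
# as the [sig-storage] Subpath atomic-writer test does.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data"]
    volumeMounts:
    - name: cm-volume
      mountPath: /test-volume
      subPath: configmap-key     # mount only this entry of the volume
  volumes:
  - name: cm-volume
    configMap:
      name: my-configmap         # assumed name; the test generates a random one
```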
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:58:40.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 13:58:40.876: INFO: Waiting up to 5m0s for pod "pod-8402495d-1226-4286-be07-a398eadd31a4" in namespace "emptydir-77" to be "success or failure"
Feb 14 13:58:40.886: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352712ms
Feb 14 13:58:42.897: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021171211s
Feb 14 13:58:45.303: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427081865s
Feb 14 13:58:47.322: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445872186s
Feb 14 13:58:49.338: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46197325s
Feb 14 13:58:51.346: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.469576783s
STEP: Saw pod success
Feb 14 13:58:51.346: INFO: Pod "pod-8402495d-1226-4286-be07-a398eadd31a4" satisfied condition "success or failure"
Feb 14 13:58:51.351: INFO: Trying to get logs from node iruya-node pod pod-8402495d-1226-4286-be07-a398eadd31a4 container test-container: 
STEP: delete the pod
Feb 14 13:58:51.400: INFO: Waiting for pod pod-8402495d-1226-4286-be07-a398eadd31a4 to disappear
Feb 14 13:58:51.436: INFO: Pod pod-8402495d-1226-4286-be07-a398eadd31a4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:58:51.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-77" for this suite.
Feb 14 13:58:57.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:58:57.770: INFO: namespace emptydir-77 deletion completed in 6.325443872s

• [SLOW TEST:17.026 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
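The EmptyDir `(root,0644,tmpfs)` case above runs a pod as root that writes a file with mode 0644 into a memory-backed emptyDir and checks the resulting permissions and content. A hedged sketch under the same assumptions (the real test uses its own mounttest image and flags; the busybox command here is illustrative):

```yaml
# Hypothetical pod reproducing the (root,0644,tmpfs) emptyDir check:
# write a 0644 file into a Memory-medium emptyDir and list its mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test/f && chmod 0644 /test/f && ls -l /test/f"]
    volumeMounts:
    - name: scratch
      mountPath: /test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed, matching the test name
```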
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 13:58:57.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 14 13:58:57.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5017'
Feb 14 13:59:00.310: INFO: stderr: ""
Feb 14 13:59:00.311: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 13:59:00.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:00.515: INFO: stderr: ""
Feb 14 13:59:00.516: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 14 13:59:05.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:05.650: INFO: stderr: ""
Feb 14 13:59:05.650: INFO: stdout: "update-demo-nautilus-ck6k5 update-demo-nautilus-hk9pd "
Feb 14 13:59:05.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ck6k5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:05.736: INFO: stderr: ""
Feb 14 13:59:05.736: INFO: stdout: ""
Feb 14 13:59:05.736: INFO: update-demo-nautilus-ck6k5 is created but not running
Feb 14 13:59:10.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:10.889: INFO: stderr: ""
Feb 14 13:59:10.890: INFO: stdout: "update-demo-nautilus-ck6k5 update-demo-nautilus-hk9pd "
Feb 14 13:59:10.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ck6k5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:11.029: INFO: stderr: ""
Feb 14 13:59:11.030: INFO: stdout: ""
Feb 14 13:59:11.030: INFO: update-demo-nautilus-ck6k5 is created but not running
Feb 14 13:59:16.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:16.163: INFO: stderr: ""
Feb 14 13:59:16.163: INFO: stdout: "update-demo-nautilus-ck6k5 update-demo-nautilus-hk9pd "
Feb 14 13:59:16.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ck6k5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:16.278: INFO: stderr: ""
Feb 14 13:59:16.278: INFO: stdout: "true"
Feb 14 13:59:16.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ck6k5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:16.413: INFO: stderr: ""
Feb 14 13:59:16.413: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:16.413: INFO: validating pod update-demo-nautilus-ck6k5
Feb 14 13:59:16.429: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:16.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:16.429: INFO: update-demo-nautilus-ck6k5 is verified up and running
Feb 14 13:59:16.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:16.526: INFO: stderr: ""
Feb 14 13:59:16.526: INFO: stdout: "true"
Feb 14 13:59:16.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:16.644: INFO: stderr: ""
Feb 14 13:59:16.644: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:16.644: INFO: validating pod update-demo-nautilus-hk9pd
Feb 14 13:59:16.660: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:16.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:16.661: INFO: update-demo-nautilus-hk9pd is verified up and running
STEP: scaling down the replication controller
Feb 14 13:59:16.665: INFO: scanned /root for discovery docs: 
Feb 14 13:59:16.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5017'
Feb 14 13:59:17.880: INFO: stderr: ""
Feb 14 13:59:17.880: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 13:59:17.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:18.081: INFO: stderr: ""
Feb 14 13:59:18.081: INFO: stdout: "update-demo-nautilus-ck6k5 update-demo-nautilus-hk9pd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 14 13:59:23.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:23.322: INFO: stderr: ""
Feb 14 13:59:23.322: INFO: stdout: "update-demo-nautilus-ck6k5 update-demo-nautilus-hk9pd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 14 13:59:28.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:28.523: INFO: stderr: ""
Feb 14 13:59:28.523: INFO: stdout: "update-demo-nautilus-hk9pd "
Feb 14 13:59:28.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:28.676: INFO: stderr: ""
Feb 14 13:59:28.677: INFO: stdout: "true"
Feb 14 13:59:28.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:28.764: INFO: stderr: ""
Feb 14 13:59:28.764: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:28.764: INFO: validating pod update-demo-nautilus-hk9pd
Feb 14 13:59:28.773: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:28.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:28.774: INFO: update-demo-nautilus-hk9pd is verified up and running
STEP: scaling up the replication controller
Feb 14 13:59:28.777: INFO: scanned /root for discovery docs: 
Feb 14 13:59:28.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5017'
Feb 14 13:59:30.017: INFO: stderr: ""
Feb 14 13:59:30.017: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 13:59:30.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:30.180: INFO: stderr: ""
Feb 14 13:59:30.181: INFO: stdout: "update-demo-nautilus-hk9pd update-demo-nautilus-t26tw "
Feb 14 13:59:30.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:30.297: INFO: stderr: ""
Feb 14 13:59:30.297: INFO: stdout: "true"
Feb 14 13:59:30.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:30.392: INFO: stderr: ""
Feb 14 13:59:30.393: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:30.393: INFO: validating pod update-demo-nautilus-hk9pd
Feb 14 13:59:30.399: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:30.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:30.399: INFO: update-demo-nautilus-hk9pd is verified up and running
Feb 14 13:59:30.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t26tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:30.545: INFO: stderr: ""
Feb 14 13:59:30.545: INFO: stdout: ""
Feb 14 13:59:30.545: INFO: update-demo-nautilus-t26tw is created but not running
Feb 14 13:59:35.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:35.728: INFO: stderr: ""
Feb 14 13:59:35.728: INFO: stdout: "update-demo-nautilus-hk9pd update-demo-nautilus-t26tw "
Feb 14 13:59:35.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:35.845: INFO: stderr: ""
Feb 14 13:59:35.846: INFO: stdout: "true"
Feb 14 13:59:35.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:35.962: INFO: stderr: ""
Feb 14 13:59:35.962: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:35.962: INFO: validating pod update-demo-nautilus-hk9pd
Feb 14 13:59:35.967: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:35.967: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:35.967: INFO: update-demo-nautilus-hk9pd is verified up and running
Feb 14 13:59:35.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t26tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:36.076: INFO: stderr: ""
Feb 14 13:59:36.076: INFO: stdout: ""
Feb 14 13:59:36.076: INFO: update-demo-nautilus-t26tw is created but not running
Feb 14 13:59:41.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5017'
Feb 14 13:59:41.229: INFO: stderr: ""
Feb 14 13:59:41.229: INFO: stdout: "update-demo-nautilus-hk9pd update-demo-nautilus-t26tw "
Feb 14 13:59:41.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:41.353: INFO: stderr: ""
Feb 14 13:59:41.354: INFO: stdout: "true"
Feb 14 13:59:41.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hk9pd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:41.472: INFO: stderr: ""
Feb 14 13:59:41.472: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:41.472: INFO: validating pod update-demo-nautilus-hk9pd
Feb 14 13:59:41.478: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:41.478: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:41.478: INFO: update-demo-nautilus-hk9pd is verified up and running
Feb 14 13:59:41.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t26tw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:41.589: INFO: stderr: ""
Feb 14 13:59:41.589: INFO: stdout: "true"
Feb 14 13:59:41.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t26tw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5017'
Feb 14 13:59:41.679: INFO: stderr: ""
Feb 14 13:59:41.679: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 13:59:41.679: INFO: validating pod update-demo-nautilus-t26tw
Feb 14 13:59:41.698: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 13:59:41.698: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 13:59:41.698: INFO: update-demo-nautilus-t26tw is verified up and running
STEP: using delete to clean up resources
Feb 14 13:59:41.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5017'
Feb 14 13:59:41.804: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 13:59:41.804: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 13:59:41.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5017'
Feb 14 13:59:41.962: INFO: stderr: "No resources found.\n"
Feb 14 13:59:41.962: INFO: stdout: ""
Feb 14 13:59:41.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5017 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 13:59:42.212: INFO: stderr: ""
Feb 14 13:59:42.213: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 13:59:42.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5017" for this suite.
Feb 14 14:00:04.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:00:04.433: INFO: namespace kubectl-5017 deletion completed in 22.162024801s

• [SLOW TEST:66.662 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
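The Update Demo test above creates a ReplicationController, scales it from 2 replicas down to 1 and back up to 2, and between each step polls the pods until the count, running state, and served image data all match. A sketch of the controller it drives — the selector label and pod template are inferred from the log, not copied from the test source:

```yaml
# Hypothetical ReplicationController like the test's update-demo-nautilus.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Scaling then follows the commands visible in the log, e.g. `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m`, with readiness polled via `kubectl get pods -l name=update-demo` and a Go template over `.status.containerStatuses`.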
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:00:04.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:00:14.738: INFO: Waiting up to 5m0s for pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84" in namespace "pods-2566" to be "success or failure"
Feb 14 14:00:14.757: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Pending", Reason="", readiness=false. Elapsed: 18.566063ms
Feb 14 14:00:16.768: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029874654s
Feb 14 14:00:18.776: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038354802s
Feb 14 14:00:20.798: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059958297s
Feb 14 14:00:22.806: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067888654s
Feb 14 14:00:24.815: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077001924s
STEP: Saw pod success
Feb 14 14:00:24.815: INFO: Pod "client-envvars-48075543-e264-4974-a35a-d058816bdc84" satisfied condition "success or failure"
Feb 14 14:00:24.820: INFO: Trying to get logs from node iruya-node pod client-envvars-48075543-e264-4974-a35a-d058816bdc84 container env3cont: 
STEP: delete the pod
Feb 14 14:00:24.935: INFO: Waiting for pod client-envvars-48075543-e264-4974-a35a-d058816bdc84 to disappear
Feb 14 14:00:24.967: INFO: Pod client-envvars-48075543-e264-4974-a35a-d058816bdc84 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:00:24.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2566" for this suite.
Feb 14 14:01:27.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:01:27.159: INFO: namespace pods-2566 deletion completed in 1m2.183148248s

• [SLOW TEST:82.726 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:01:27.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:01:28.130: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 14 14:01:33.148: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 14:01:37.167: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 14 14:01:47.288: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2857,SelfLink:/apis/apps/v1/namespaces/deployment-2857/deployments/test-cleanup-deployment,UID:2bd15ca5-3401-4a7f-ae7f-ac2a9b3ee1d3,ResourceVersion:24328351,Generation:1,CreationTimestamp:2020-02-14 14:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-14 14:01:37 +0000 UTC 2020-02-14 14:01:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-14 14:01:46 +0000 UTC 2020-02-14 14:01:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 14:01:47.292: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2857,SelfLink:/apis/apps/v1/namespaces/deployment-2857/replicasets/test-cleanup-deployment-55bbcbc84c,UID:61f75827-81ef-4ee8-b258-878b25b34e87,ResourceVersion:24328341,Generation:1,CreationTimestamp:2020-02-14 14:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2bd15ca5-3401-4a7f-ae7f-ac2a9b3ee1d3 0xc0018718f7 0xc0018718f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 14 14:01:47.297: INFO: Pod "test-cleanup-deployment-55bbcbc84c-2wl2q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-2wl2q,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2857,SelfLink:/api/v1/namespaces/deployment-2857/pods/test-cleanup-deployment-55bbcbc84c-2wl2q,UID:c27e389c-0894-476a-88e8-9b696514e34c,ResourceVersion:24328340,Generation:0,CreationTimestamp:2020-02-14 14:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 61f75827-81ef-4ee8-b258-878b25b34e87 0xc00234a857 0xc00234a858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nc6bw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nc6bw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-nc6bw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001aa4010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001aa4030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:01:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:01:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:01:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-14 14:01:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-14 14:01:45 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d8ca9095f85576016f7c1610075538cba2e330447019285f3c238f411b2c7d49}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:01:47.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2857" for this suite.
Feb 14 14:01:53.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:01:53.399: INFO: namespace deployment-2857 deletion completed in 6.096054274s

• [SLOW TEST:26.239 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
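The deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the controller delete the old replica set as soon as the rollout completes — the behavior this spec waits for. A sketch of the equivalent manifest (the name, labels, and image mirror the dump; everything else is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets once a rollout succeeds
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The default for `revisionHistoryLimit` is 10; setting it to 0 trades away the ability to `kubectl rollout undo` in exchange for not accumulating scaled-to-zero ReplicaSets.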
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:01:53.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9428
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9428 to expose endpoints map[]
Feb 14 14:01:53.705: INFO: Get endpoints failed (10.908533ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 14 14:01:54.713: INFO: successfully validated that service multi-endpoint-test in namespace services-9428 exposes endpoints map[] (1.019631688s elapsed)
STEP: Creating pod pod1 in namespace services-9428
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9428 to expose endpoints map[pod1:[100]]
Feb 14 14:01:59.313: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.573781556s elapsed, will retry)
Feb 14 14:02:04.406: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.666327753s elapsed, will retry)
Feb 14 14:02:05.415: INFO: successfully validated that service multi-endpoint-test in namespace services-9428 exposes endpoints map[pod1:[100]] (10.675372419s elapsed)
STEP: Creating pod pod2 in namespace services-9428
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9428 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 14 14:02:10.789: INFO: Unexpected endpoints: found map[92b91508-7540-4852-89b8-cb623920ddbf:[100]], expected map[pod1:[100] pod2:[101]] (5.368498534s elapsed, will retry)
Feb 14 14:02:13.858: INFO: successfully validated that service multi-endpoint-test in namespace services-9428 exposes endpoints map[pod1:[100] pod2:[101]] (8.437447409s elapsed)
STEP: Deleting pod pod1 in namespace services-9428
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9428 to expose endpoints map[pod2:[101]]
Feb 14 14:02:14.955: INFO: successfully validated that service multi-endpoint-test in namespace services-9428 exposes endpoints map[pod2:[101]] (1.083840218s elapsed)
STEP: Deleting pod pod2 in namespace services-9428
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9428 to expose endpoints map[]
Feb 14 14:02:15.989: INFO: successfully validated that service multi-endpoint-test in namespace services-9428 exposes endpoints map[] (1.028236769s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:02:16.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9428" for this suite.
Feb 14 14:02:39.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:02:39.207: INFO: namespace services-9428 deletion completed in 22.250296255s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:45.808 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
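The endpoint maps above pair `pod1` with container port 100 and `pod2` with 101 because the service publishes two ports, each backed by a different pod. A sketch of such a multiport service (only the `targetPort` values 100 and 101 come from the log; the port names and service port numbers are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # illustrative selector label
  ports:
  - name: portname1            # illustrative name; required when >1 port
    port: 80
    targetPort: 100            # pod1's container port, per the log
  - name: portname2
    port: 81
    targetPort: 101            # pod2's container port, per the log
```

When a service exposes more than one port, each entry must be named so the endpoints controller can match ports to pods individually — which is exactly what the per-pod endpoint maps in the log reflect.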
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:02:39.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1105
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1105
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1105
Feb 14 14:02:39.301: INFO: Found 0 stateful pods, waiting for 1
Feb 14 14:02:49.481: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 14 14:02:59.317: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 14 14:02:59.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:03:00.074: INFO: stderr: "I0214 14:02:59.617406    1635 log.go:172] (0xc0009fa4d0) (0xc000927040) Create stream\nI0214 14:02:59.617625    1635 log.go:172] (0xc0009fa4d0) (0xc000927040) Stream added, broadcasting: 1\nI0214 14:02:59.637459    1635 log.go:172] (0xc0009fa4d0) Reply frame received for 1\nI0214 14:02:59.637535    1635 log.go:172] (0xc0009fa4d0) (0xc00088c000) Create stream\nI0214 14:02:59.637549    1635 log.go:172] (0xc0009fa4d0) (0xc00088c000) Stream added, broadcasting: 3\nI0214 14:02:59.639457    1635 log.go:172] (0xc0009fa4d0) Reply frame received for 3\nI0214 14:02:59.639538    1635 log.go:172] (0xc0009fa4d0) (0xc000926000) Create stream\nI0214 14:02:59.639562    1635 log.go:172] (0xc0009fa4d0) (0xc000926000) Stream added, broadcasting: 5\nI0214 14:02:59.642259    1635 log.go:172] (0xc0009fa4d0) Reply frame received for 5\nI0214 14:02:59.781167    1635 log.go:172] (0xc0009fa4d0) Data frame received for 5\nI0214 14:02:59.781245    1635 log.go:172] (0xc000926000) (5) Data frame handling\nI0214 14:02:59.781284    1635 log.go:172] (0xc000926000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:02:59.868630    1635 log.go:172] (0xc0009fa4d0) Data frame received for 3\nI0214 14:02:59.868755    1635 log.go:172] (0xc00088c000) (3) Data frame handling\nI0214 14:02:59.868789    1635 log.go:172] (0xc00088c000) (3) Data frame sent\nI0214 14:03:00.053846    1635 log.go:172] (0xc0009fa4d0) (0xc000926000) Stream removed, broadcasting: 5\nI0214 14:03:00.054086    1635 log.go:172] (0xc0009fa4d0) (0xc00088c000) Stream removed, broadcasting: 3\nI0214 14:03:00.054184    1635 log.go:172] (0xc0009fa4d0) Data frame received for 1\nI0214 14:03:00.054223    1635 log.go:172] (0xc000927040) (1) Data frame handling\nI0214 14:03:00.054280    1635 log.go:172] (0xc000927040) (1) Data frame sent\nI0214 14:03:00.054313    1635 log.go:172] (0xc0009fa4d0) (0xc000927040) Stream removed, broadcasting: 1\nI0214 14:03:00.054366    1635 log.go:172] 
(0xc0009fa4d0) Go away received\nI0214 14:03:00.056582    1635 log.go:172] (0xc0009fa4d0) (0xc000927040) Stream removed, broadcasting: 1\nI0214 14:03:00.056607    1635 log.go:172] (0xc0009fa4d0) (0xc00088c000) Stream removed, broadcasting: 3\nI0214 14:03:00.056622    1635 log.go:172] (0xc0009fa4d0) (0xc000926000) Stream removed, broadcasting: 5\n"
Feb 14 14:03:00.075: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:03:00.075: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

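The `mv` above is how the suite toggles pod health: the stateful pods serve `index.html` through an HTTP readiness probe, so moving the file out of the nginx docroot makes the probe fail and the pod go NotReady (the very next log line waits for exactly that), and moving it back later restores readiness. The probe in the test's pod template is roughly of this shape (the exact thresholds and port are an assumption, not taken from this log):

```yaml
readinessProbe:
  httpGet:
    path: /index.html   # the file the test moves in and out of the docroot
    port: 80
  periodSeconds: 1
  successThreshold: 1
  failureThreshold: 1
```

With the default `OrderedReady` pod management policy, a NotReady pod is what halts further scaling — the "doesn't scale past 1" retries below are the controller honoring that ordering guarantee.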
Feb 14 14:03:00.095: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 14 14:03:10.106: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:03:10.106: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:03:10.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998677s
Feb 14 14:03:11.157: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.980455704s
Feb 14 14:03:12.194: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.967971706s
Feb 14 14:03:13.213: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.930552021s
Feb 14 14:03:14.221: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.912460392s
Feb 14 14:03:15.239: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.904568263s
Feb 14 14:03:16.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.884920014s
Feb 14 14:03:17.263: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.874852833s
Feb 14 14:03:18.291: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.861715544s
Feb 14 14:03:19.357: INFO: Verifying statefulset ss doesn't scale past 1 for another 833.973669ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1105
Feb 14 14:03:20.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:03:21.563: INFO: stderr: "I0214 14:03:21.221936    1657 log.go:172] (0xc000116790) (0xc0005c05a0) Create stream\nI0214 14:03:21.222309    1657 log.go:172] (0xc000116790) (0xc0005c05a0) Stream added, broadcasting: 1\nI0214 14:03:21.232031    1657 log.go:172] (0xc000116790) Reply frame received for 1\nI0214 14:03:21.232084    1657 log.go:172] (0xc000116790) (0xc0007d8000) Create stream\nI0214 14:03:21.232097    1657 log.go:172] (0xc000116790) (0xc0007d8000) Stream added, broadcasting: 3\nI0214 14:03:21.233678    1657 log.go:172] (0xc000116790) Reply frame received for 3\nI0214 14:03:21.233704    1657 log.go:172] (0xc000116790) (0xc00089e000) Create stream\nI0214 14:03:21.233714    1657 log.go:172] (0xc000116790) (0xc00089e000) Stream added, broadcasting: 5\nI0214 14:03:21.234961    1657 log.go:172] (0xc000116790) Reply frame received for 5\nI0214 14:03:21.398168    1657 log.go:172] (0xc000116790) Data frame received for 5\nI0214 14:03:21.398245    1657 log.go:172] (0xc00089e000) (5) Data frame handling\nI0214 14:03:21.398269    1657 log.go:172] (0xc00089e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:03:21.398293    1657 log.go:172] (0xc000116790) Data frame received for 3\nI0214 14:03:21.398306    1657 log.go:172] (0xc0007d8000) (3) Data frame handling\nI0214 14:03:21.398324    1657 log.go:172] (0xc0007d8000) (3) Data frame sent\nI0214 14:03:21.550844    1657 log.go:172] (0xc000116790) Data frame received for 1\nI0214 14:03:21.550993    1657 log.go:172] (0xc000116790) (0xc0007d8000) Stream removed, broadcasting: 3\nI0214 14:03:21.551070    1657 log.go:172] (0xc0005c05a0) (1) Data frame handling\nI0214 14:03:21.551102    1657 log.go:172] (0xc0005c05a0) (1) Data frame sent\nI0214 14:03:21.551201    1657 log.go:172] (0xc000116790) (0xc00089e000) Stream removed, broadcasting: 5\nI0214 14:03:21.551267    1657 log.go:172] (0xc000116790) (0xc0005c05a0) Stream removed, broadcasting: 1\nI0214 14:03:21.551311    1657 log.go:172] 
(0xc000116790) Go away received\nI0214 14:03:21.552286    1657 log.go:172] (0xc000116790) (0xc0005c05a0) Stream removed, broadcasting: 1\nI0214 14:03:21.552318    1657 log.go:172] (0xc000116790) (0xc0007d8000) Stream removed, broadcasting: 3\nI0214 14:03:21.552343    1657 log.go:172] (0xc000116790) (0xc00089e000) Stream removed, broadcasting: 5\n"
Feb 14 14:03:21.564: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:03:21.564: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:03:21.571: INFO: Found 1 stateful pods, waiting for 3
Feb 14 14:03:31.593: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:03:31.593: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:03:31.593: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 14:03:41.588: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:03:41.588: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:03:41.588: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 14 14:03:41.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:03:42.272: INFO: stderr: "I0214 14:03:41.939942    1675 log.go:172] (0xc0009fa2c0) (0xc000962640) Create stream\nI0214 14:03:41.940102    1675 log.go:172] (0xc0009fa2c0) (0xc000962640) Stream added, broadcasting: 1\nI0214 14:03:41.949301    1675 log.go:172] (0xc0009fa2c0) Reply frame received for 1\nI0214 14:03:41.949356    1675 log.go:172] (0xc0009fa2c0) (0xc000962780) Create stream\nI0214 14:03:41.949371    1675 log.go:172] (0xc0009fa2c0) (0xc000962780) Stream added, broadcasting: 3\nI0214 14:03:41.951094    1675 log.go:172] (0xc0009fa2c0) Reply frame received for 3\nI0214 14:03:41.951126    1675 log.go:172] (0xc0009fa2c0) (0xc000640140) Create stream\nI0214 14:03:41.951145    1675 log.go:172] (0xc0009fa2c0) (0xc000640140) Stream added, broadcasting: 5\nI0214 14:03:41.952912    1675 log.go:172] (0xc0009fa2c0) Reply frame received for 5\nI0214 14:03:42.072391    1675 log.go:172] (0xc0009fa2c0) Data frame received for 3\nI0214 14:03:42.072805    1675 log.go:172] (0xc000962780) (3) Data frame handling\nI0214 14:03:42.072826    1675 log.go:172] (0xc000962780) (3) Data frame sent\nI0214 14:03:42.072904    1675 log.go:172] (0xc0009fa2c0) Data frame received for 5\nI0214 14:03:42.072934    1675 log.go:172] (0xc000640140) (5) Data frame handling\nI0214 14:03:42.072951    1675 log.go:172] (0xc000640140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:03:42.256802    1675 log.go:172] (0xc0009fa2c0) Data frame received for 1\nI0214 14:03:42.256953    1675 log.go:172] (0xc0009fa2c0) (0xc000962780) Stream removed, broadcasting: 3\nI0214 14:03:42.257033    1675 log.go:172] (0xc000962640) (1) Data frame handling\nI0214 14:03:42.257065    1675 log.go:172] (0xc000962640) (1) Data frame sent\nI0214 14:03:42.257083    1675 log.go:172] (0xc0009fa2c0) (0xc000962640) Stream removed, broadcasting: 1\nI0214 14:03:42.258525    1675 log.go:172] (0xc0009fa2c0) (0xc000640140) Stream removed, broadcasting: 5\nI0214 14:03:42.258572    1675 log.go:172] 
(0xc0009fa2c0) Go away received\nI0214 14:03:42.258634    1675 log.go:172] (0xc0009fa2c0) (0xc000962640) Stream removed, broadcasting: 1\nI0214 14:03:42.258647    1675 log.go:172] (0xc0009fa2c0) (0xc000962780) Stream removed, broadcasting: 3\nI0214 14:03:42.258659    1675 log.go:172] (0xc0009fa2c0) (0xc000640140) Stream removed, broadcasting: 5\n"
Feb 14 14:03:42.273: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:03:42.273: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:03:42.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:03:42.830: INFO: stderr: "I0214 14:03:42.485502    1698 log.go:172] (0xc000946420) (0xc000666780) Create stream\nI0214 14:03:42.485916    1698 log.go:172] (0xc000946420) (0xc000666780) Stream added, broadcasting: 1\nI0214 14:03:42.492228    1698 log.go:172] (0xc000946420) Reply frame received for 1\nI0214 14:03:42.492259    1698 log.go:172] (0xc000946420) (0xc0009e4000) Create stream\nI0214 14:03:42.492265    1698 log.go:172] (0xc000946420) (0xc0009e4000) Stream added, broadcasting: 3\nI0214 14:03:42.493226    1698 log.go:172] (0xc000946420) Reply frame received for 3\nI0214 14:03:42.493275    1698 log.go:172] (0xc000946420) (0xc0007cc000) Create stream\nI0214 14:03:42.493285    1698 log.go:172] (0xc000946420) (0xc0007cc000) Stream added, broadcasting: 5\nI0214 14:03:42.494226    1698 log.go:172] (0xc000946420) Reply frame received for 5\nI0214 14:03:42.650740    1698 log.go:172] (0xc000946420) Data frame received for 5\nI0214 14:03:42.650800    1698 log.go:172] (0xc0007cc000) (5) Data frame handling\nI0214 14:03:42.650814    1698 log.go:172] (0xc0007cc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:03:42.749979    1698 log.go:172] (0xc000946420) Data frame received for 3\nI0214 14:03:42.750002    1698 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0214 14:03:42.750012    1698 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0214 14:03:42.820480    1698 log.go:172] (0xc000946420) Data frame received for 1\nI0214 14:03:42.821210    1698 log.go:172] (0xc000946420) (0xc0009e4000) Stream removed, broadcasting: 3\nI0214 14:03:42.821533    1698 log.go:172] (0xc000666780) (1) Data frame handling\nI0214 14:03:42.821622    1698 log.go:172] (0xc000666780) (1) Data frame sent\nI0214 14:03:42.821702    1698 log.go:172] (0xc000946420) (0xc000666780) Stream removed, broadcasting: 1\nI0214 14:03:42.821887    1698 log.go:172] (0xc000946420) (0xc0007cc000) Stream removed, broadcasting: 5\nI0214 14:03:42.822050    1698 log.go:172] (0xc000946420) Go away received\nI0214 14:03:42.822938    1698 log.go:172] (0xc000946420) (0xc000666780) Stream removed, broadcasting: 1\nI0214 14:03:42.823497    1698 log.go:172] (0xc000946420) (0xc0009e4000) Stream removed, broadcasting: 3\nI0214 14:03:42.823526    1698 log.go:172] (0xc000946420) (0xc0007cc000) Stream removed, broadcasting: 5\n"
Feb 14 14:03:42.831: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:03:42.831: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:03:42.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:03:43.370: INFO: stderr: "I0214 14:03:43.073976    1718 log.go:172] (0xc0007de9a0) (0xc0009aebe0) Create stream\nI0214 14:03:43.074145    1718 log.go:172] (0xc0007de9a0) (0xc0009aebe0) Stream added, broadcasting: 1\nI0214 14:03:43.088446    1718 log.go:172] (0xc0007de9a0) Reply frame received for 1\nI0214 14:03:43.088483    1718 log.go:172] (0xc0007de9a0) (0xc0009ae000) Create stream\nI0214 14:03:43.088492    1718 log.go:172] (0xc0007de9a0) (0xc0009ae000) Stream added, broadcasting: 3\nI0214 14:03:43.090386    1718 log.go:172] (0xc0007de9a0) Reply frame received for 3\nI0214 14:03:43.090409    1718 log.go:172] (0xc0007de9a0) (0xc0009ae0a0) Create stream\nI0214 14:03:43.090416    1718 log.go:172] (0xc0007de9a0) (0xc0009ae0a0) Stream added, broadcasting: 5\nI0214 14:03:43.091741    1718 log.go:172] (0xc0007de9a0) Reply frame received for 5\nI0214 14:03:43.172477    1718 log.go:172] (0xc0007de9a0) Data frame received for 5\nI0214 14:03:43.172512    1718 log.go:172] (0xc0009ae0a0) (5) Data frame handling\nI0214 14:03:43.172535    1718 log.go:172] (0xc0009ae0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:03:43.207800    1718 log.go:172] (0xc0007de9a0) Data frame received for 3\nI0214 14:03:43.207821    1718 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0214 14:03:43.207832    1718 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0214 14:03:43.354813    1718 log.go:172] (0xc0007de9a0) Data frame received for 1\nI0214 14:03:43.354985    1718 log.go:172] (0xc0007de9a0) (0xc0009ae0a0) Stream removed, broadcasting: 5\nI0214 14:03:43.355230    1718 log.go:172] (0xc0009aebe0) (1) Data frame handling\nI0214 14:03:43.355361    1718 log.go:172] (0xc0009aebe0) (1) Data frame sent\nI0214 14:03:43.355465    1718 log.go:172] (0xc0007de9a0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0214 14:03:43.355533    1718 log.go:172] (0xc0007de9a0) (0xc0009aebe0) Stream removed, broadcasting: 1\nI0214 14:03:43.355565    1718 log.go:172] (0xc0007de9a0) Go away received\nI0214 14:03:43.357107    1718 log.go:172] (0xc0007de9a0) (0xc0009aebe0) Stream removed, broadcasting: 1\nI0214 14:03:43.357190    1718 log.go:172] (0xc0007de9a0) (0xc0009ae000) Stream removed, broadcasting: 3\nI0214 14:03:43.357258    1718 log.go:172] (0xc0007de9a0) (0xc0009ae0a0) Stream removed, broadcasting: 5\n"
Feb 14 14:03:43.371: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:03:43.371: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
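
The `kubectl exec` invocations above break readiness on each replica by moving nginx's index.html out of the web root (so the HTTP readiness probe starts failing); the later invocations move it back to restore readiness. A minimal sketch of how such an argv could be assembled — the helper name and structure are hypothetical, not the e2e framework's own:

```python
def build_readiness_toggle_cmd(namespace: str, pod: str, break_readiness: bool):
    """Build a kubectl argv that moves nginx's index.html aside
    (breaking the HTTP readiness probe) or restores it.
    Hypothetical helper; mirrors the commands seen in the log."""
    if break_readiness:
        shell_cmd = "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
    else:
        shell_cmd = "mv -v /tmp/index.html /usr/share/nginx/html/ || true"
    return [
        "kubectl", "--kubeconfig=/root/.kube/config",
        "exec", f"--namespace={namespace}", pod,
        "--", "/bin/sh", "-x", "-c", shell_cmd,
    ]
```

The trailing `|| true` keeps the exec exit code zero even when the file has already been moved, so the toggle is safe to repeat.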

Feb 14 14:03:43.371: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:03:43.376: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 14 14:03:53.392: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:03:53.392: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:03:53.392: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:03:53.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999996281s
Feb 14 14:03:54.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988505156s
Feb 14 14:03:55.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979473961s
Feb 14 14:03:56.453: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969043631s
Feb 14 14:03:57.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950118573s
Feb 14 14:03:58.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.940048774s
Feb 14 14:03:59.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.867149307s
Feb 14 14:04:00.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.858813273s
Feb 14 14:04:01.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.842893291s
Feb 14 14:04:02.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 828.769232ms
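
The "doesn't scale past 3" countdown above is a negative check: the test holds for a fixed window and fails if the replica count ever exceeds the cap. A sketch of that loop, with the replica-count getter and clock injected so it can run without a cluster — names are illustrative, not the framework's own:

```python
import time

def verify_no_scale_past(get_replicas, limit, window_s=10.0, interval_s=1.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll for window_s seconds; raise if replicas ever exceed `limit`.
    Hypothetical sketch of the e2e framework's hold-window check."""
    deadline = clock() + window_s
    while True:
        remaining = deadline - clock()
        if remaining <= 0:
            return True  # the cap held for the whole window
        n = get_replicas()
        if n > limit:
            raise AssertionError(f"statefulset scaled to {n} > {limit}")
        print(f"Verifying statefulset doesn't scale past {limit} "
              f"for another {remaining:.1f}s")
        sleep(min(interval_s, remaining))
```

Injecting `clock` and `sleep` is what makes a timing-dependent check like this unit-testable with a fake clock.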
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-1105
Feb 14 14:04:03.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:04:04.469: INFO: stderr: "I0214 14:04:03.877135    1740 log.go:172] (0xc000116d10) (0xc0002266e0) Create stream\nI0214 14:04:03.878147    1740 log.go:172] (0xc000116d10) (0xc0002266e0) Stream added, broadcasting: 1\nI0214 14:04:03.891375    1740 log.go:172] (0xc000116d10) Reply frame received for 1\nI0214 14:04:03.891501    1740 log.go:172] (0xc000116d10) (0xc00069a1e0) Create stream\nI0214 14:04:03.891519    1740 log.go:172] (0xc000116d10) (0xc00069a1e0) Stream added, broadcasting: 3\nI0214 14:04:03.897610    1740 log.go:172] (0xc000116d10) Reply frame received for 3\nI0214 14:04:03.897729    1740 log.go:172] (0xc000116d10) (0xc000226780) Create stream\nI0214 14:04:03.897754    1740 log.go:172] (0xc000116d10) (0xc000226780) Stream added, broadcasting: 5\nI0214 14:04:03.901891    1740 log.go:172] (0xc000116d10) Reply frame received for 5\nI0214 14:04:04.130185    1740 log.go:172] (0xc000116d10) Data frame received for 5\nI0214 14:04:04.130235    1740 log.go:172] (0xc000226780) (5) Data frame handling\nI0214 14:04:04.130260    1740 log.go:172] (0xc000226780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:04:04.133363    1740 log.go:172] (0xc000116d10) Data frame received for 3\nI0214 14:04:04.133382    1740 log.go:172] (0xc00069a1e0) (3) Data frame handling\nI0214 14:04:04.133399    1740 log.go:172] (0xc00069a1e0) (3) Data frame sent\nI0214 14:04:04.456131    1740 log.go:172] (0xc000116d10) Data frame received for 1\nI0214 14:04:04.456199    1740 log.go:172] (0xc000116d10) (0xc000226780) Stream removed, broadcasting: 5\nI0214 14:04:04.456241    1740 log.go:172] (0xc0002266e0) (1) Data frame handling\nI0214 14:04:04.456271    1740 log.go:172] (0xc0002266e0) (1) Data frame sent\nI0214 14:04:04.456328    1740 log.go:172] (0xc000116d10) (0xc00069a1e0) Stream removed, broadcasting: 3\nI0214 14:04:04.456368    1740 log.go:172] (0xc000116d10) (0xc0002266e0) Stream removed, broadcasting: 1\nI0214 14:04:04.456386    1740 log.go:172] (0xc000116d10) Go away received\nI0214 14:04:04.457370    1740 log.go:172] (0xc000116d10) (0xc0002266e0) Stream removed, broadcasting: 1\nI0214 14:04:04.457386    1740 log.go:172] (0xc000116d10) (0xc00069a1e0) Stream removed, broadcasting: 3\nI0214 14:04:04.457390    1740 log.go:172] (0xc000116d10) (0xc000226780) Stream removed, broadcasting: 5\n"
Feb 14 14:04:04.469: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:04:04.469: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:04:04.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:04:04.868: INFO: stderr: "I0214 14:04:04.662705    1758 log.go:172] (0xc0009aa0b0) (0xc0008106e0) Create stream\nI0214 14:04:04.662867    1758 log.go:172] (0xc0009aa0b0) (0xc0008106e0) Stream added, broadcasting: 1\nI0214 14:04:04.665482    1758 log.go:172] (0xc0009aa0b0) Reply frame received for 1\nI0214 14:04:04.665505    1758 log.go:172] (0xc0009aa0b0) (0xc0006663c0) Create stream\nI0214 14:04:04.665519    1758 log.go:172] (0xc0009aa0b0) (0xc0006663c0) Stream added, broadcasting: 3\nI0214 14:04:04.666435    1758 log.go:172] (0xc0009aa0b0) Reply frame received for 3\nI0214 14:04:04.666462    1758 log.go:172] (0xc0009aa0b0) (0xc000292000) Create stream\nI0214 14:04:04.666472    1758 log.go:172] (0xc0009aa0b0) (0xc000292000) Stream added, broadcasting: 5\nI0214 14:04:04.667331    1758 log.go:172] (0xc0009aa0b0) Reply frame received for 5\nI0214 14:04:04.764906    1758 log.go:172] (0xc0009aa0b0) Data frame received for 5\nI0214 14:04:04.764944    1758 log.go:172] (0xc000292000) (5) Data frame handling\nI0214 14:04:04.764973    1758 log.go:172] (0xc000292000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:04:04.765757    1758 log.go:172] (0xc0009aa0b0) Data frame received for 3\nI0214 14:04:04.765838    1758 log.go:172] (0xc0006663c0) (3) Data frame handling\nI0214 14:04:04.765904    1758 log.go:172] (0xc0006663c0) (3) Data frame sent\nI0214 14:04:04.860384    1758 log.go:172] (0xc0009aa0b0) Data frame received for 1\nI0214 14:04:04.860714    1758 log.go:172] (0xc0008106e0) (1) Data frame handling\nI0214 14:04:04.860733    1758 log.go:172] (0xc0008106e0) (1) Data frame sent\nI0214 14:04:04.860751    1758 log.go:172] (0xc0009aa0b0) (0xc0008106e0) Stream removed, broadcasting: 1\nI0214 14:04:04.860769    1758 log.go:172] (0xc0009aa0b0) (0xc0006663c0) Stream removed, broadcasting: 3\nI0214 14:04:04.860787    1758 log.go:172] (0xc0009aa0b0) (0xc000292000) Stream removed, broadcasting: 5\nI0214 14:04:04.860954    1758 log.go:172] (0xc0009aa0b0) Go away received\nI0214 14:04:04.861731    1758 log.go:172] (0xc0009aa0b0) (0xc0008106e0) Stream removed, broadcasting: 1\nI0214 14:04:04.861748    1758 log.go:172] (0xc0009aa0b0) (0xc0006663c0) Stream removed, broadcasting: 3\nI0214 14:04:04.861755    1758 log.go:172] (0xc0009aa0b0) (0xc000292000) Stream removed, broadcasting: 5\n"
Feb 14 14:04:04.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:04:04.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:04:04.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1105 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:04:05.471: INFO: stderr: "I0214 14:04:05.080102    1775 log.go:172] (0xc00099c0b0) (0xc0009d26e0) Create stream\nI0214 14:04:05.080233    1775 log.go:172] (0xc00099c0b0) (0xc0009d26e0) Stream added, broadcasting: 1\nI0214 14:04:05.090166    1775 log.go:172] (0xc00099c0b0) Reply frame received for 1\nI0214 14:04:05.090257    1775 log.go:172] (0xc00099c0b0) (0xc0005d8460) Create stream\nI0214 14:04:05.090269    1775 log.go:172] (0xc00099c0b0) (0xc0005d8460) Stream added, broadcasting: 3\nI0214 14:04:05.091707    1775 log.go:172] (0xc00099c0b0) Reply frame received for 3\nI0214 14:04:05.091795    1775 log.go:172] (0xc00099c0b0) (0xc0009d2780) Create stream\nI0214 14:04:05.091816    1775 log.go:172] (0xc00099c0b0) (0xc0009d2780) Stream added, broadcasting: 5\nI0214 14:04:05.092978    1775 log.go:172] (0xc00099c0b0) Reply frame received for 5\nI0214 14:04:05.321859    1775 log.go:172] (0xc00099c0b0) Data frame received for 3\nI0214 14:04:05.321949    1775 log.go:172] (0xc0005d8460) (3) Data frame handling\nI0214 14:04:05.321976    1775 log.go:172] (0xc0005d8460) (3) Data frame sent\nI0214 14:04:05.322040    1775 log.go:172] (0xc00099c0b0) Data frame received for 5\nI0214 14:04:05.322062    1775 log.go:172] (0xc0009d2780) (5) Data frame handling\nI0214 14:04:05.322079    1775 log.go:172] (0xc0009d2780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:04:05.459898    1775 log.go:172] (0xc00099c0b0) (0xc0009d2780) Stream removed, broadcasting: 5\nI0214 14:04:05.460212    1775 log.go:172] (0xc00099c0b0) Data frame received for 1\nI0214 14:04:05.460280    1775 log.go:172] (0xc00099c0b0) (0xc0005d8460) Stream removed, broadcasting: 3\nI0214 14:04:05.460452    1775 log.go:172] (0xc0009d26e0) (1) Data frame handling\nI0214 14:04:05.460486    1775 log.go:172] (0xc0009d26e0) (1) Data frame sent\nI0214 14:04:05.460500    1775 log.go:172] (0xc00099c0b0) (0xc0009d26e0) Stream removed, broadcasting: 1\nI0214 14:04:05.460515    1775 log.go:172] (0xc00099c0b0) Go away received\nI0214 14:04:05.461500    1775 log.go:172] (0xc00099c0b0) (0xc0009d26e0) Stream removed, broadcasting: 1\nI0214 14:04:05.461512    1775 log.go:172] (0xc00099c0b0) (0xc0005d8460) Stream removed, broadcasting: 3\nI0214 14:04:05.461517    1775 log.go:172] (0xc00099c0b0) (0xc0009d2780) Stream removed, broadcasting: 5\n"
Feb 14 14:04:05.471: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:04:05.471: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:04:05.472: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
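
StatefulSets scale down in reverse ordinal order: ss-2 is deleted first, then ss-1, then ss-0. A hedged sketch of how the reverse-order property can be checked from a recorded deletion sequence (the event format here is illustrative, not the framework's own):

```python
def deleted_in_reverse_order(deleted_pods):
    """deleted_pods: pod names in the order they were deleted,
    e.g. ['ss-2', 'ss-1', 'ss-0']. Returns True if the ordinals
    strictly descend, i.e. scale-down happened in reverse order."""
    ordinals = [int(name.rsplit("-", 1)[1]) for name in deleted_pods]
    return ordinals == sorted(ordinals, reverse=True)
```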
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 14 14:04:35.504: INFO: Deleting all statefulset in ns statefulset-1105
Feb 14 14:04:35.512: INFO: Scaling statefulset ss to 0
Feb 14 14:04:35.526: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:04:35.528: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:04:35.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1105" for this suite.
Feb 14 14:04:41.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:04:41.747: INFO: namespace statefulset-1105 deletion completed in 6.188905173s

• [SLOW TEST:122.539 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:04:41.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-n5mq
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 14:04:42.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-n5mq" in namespace "subpath-2166" to be "success or failure"
Feb 14 14:04:42.059: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.753592ms
Feb 14 14:04:44.072: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021864917s
Feb 14 14:04:46.087: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036554266s
Feb 14 14:04:48.097: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04681013s
Feb 14 14:04:50.117: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067260367s
Feb 14 14:04:52.125: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 10.074792089s
Feb 14 14:04:54.138: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 12.087884059s
Feb 14 14:04:56.147: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 14.096987868s
Feb 14 14:04:58.157: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 16.107394227s
Feb 14 14:05:00.166: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 18.11574555s
Feb 14 14:05:02.182: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 20.131438376s
Feb 14 14:05:04.194: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 22.143848407s
Feb 14 14:05:06.205: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 24.154940709s
Feb 14 14:05:08.213: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 26.163284743s
Feb 14 14:05:10.224: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 28.17373143s
Feb 14 14:05:12.228: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Running", Reason="", readiness=true. Elapsed: 30.178091787s
Feb 14 14:05:14.236: INFO: Pod "pod-subpath-test-downwardapi-n5mq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.185788511s
STEP: Saw pod success
Feb 14 14:05:14.236: INFO: Pod "pod-subpath-test-downwardapi-n5mq" satisfied condition "success or failure"
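
The "Waiting up to 5m0s for pod ... to be 'success or failure'" sequence above is a poll-until-terminal-phase loop (roughly a 2s interval against a 5m deadline). A sketch of that pattern with the phase getter and clock injected for testability — the function name is hypothetical:

```python
import time

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod reaches a terminal phase or the deadline passes.
    Hypothetical sketch of the e2e framework's wait loop."""
    start = clock()
    while clock() - start < timeout_s:
        phase = get_phase()
        print(f'Phase="{phase}". Elapsed: {clock() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")
```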
Feb 14 14:05:14.240: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-n5mq container test-container-subpath-downwardapi-n5mq: 
STEP: delete the pod
Feb 14 14:05:14.399: INFO: Waiting for pod pod-subpath-test-downwardapi-n5mq to disappear
Feb 14 14:05:14.428: INFO: Pod pod-subpath-test-downwardapi-n5mq no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-n5mq
Feb 14 14:05:14.428: INFO: Deleting pod "pod-subpath-test-downwardapi-n5mq" in namespace "subpath-2166"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:05:14.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2166" for this suite.
Feb 14 14:05:20.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:05:20.581: INFO: namespace subpath-2166 deletion completed in 6.143900718s

• [SLOW TEST:38.834 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:05:20.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:05:30.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6114" for this suite.
Feb 14 14:06:20.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:06:20.876: INFO: namespace kubelet-test-6114 deletion completed in 50.146861817s

• [SLOW TEST:60.294 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:06:20.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-d3c56e4f-e6ad-4463-8807-56b52c95ee7f
STEP: Creating a pod to test consume secrets
Feb 14 14:06:20.989: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9" in namespace "projected-1139" to be "success or failure"
Feb 14 14:06:21.047: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.065988ms
Feb 14 14:06:23.053: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064178305s
Feb 14 14:06:25.074: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085217315s
Feb 14 14:06:27.083: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094022428s
Feb 14 14:06:29.094: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105138545s
Feb 14 14:06:31.107: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117879503s
STEP: Saw pod success
Feb 14 14:06:31.107: INFO: Pod "pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9" satisfied condition "success or failure"
Feb 14 14:06:31.116: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9 container secret-volume-test: 
STEP: delete the pod
Feb 14 14:06:31.165: INFO: Waiting for pod pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9 to disappear
Feb 14 14:06:31.168: INFO: Pod pod-projected-secrets-4dee6dde-03c6-4b1a-aa8b-d20d2b5387e9 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:06:31.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1139" for this suite.
Feb 14 14:06:37.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:06:37.457: INFO: namespace projected-1139 deletion completed in 6.222422941s

• [SLOW TEST:16.580 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:06:37.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-0e4502ed-eac4-409c-bb67-27b4266fd63f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-0e4502ed-eac4-409c-bb67-27b4266fd63f
STEP: waiting to observe update in volume
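
The updated ConfigMap appears inside the running pod because the kubelet's atomic writer refreshes the volume with a symlink swap: the new payload is written to a fresh hidden directory and a `..data` symlink is atomically repointed, so readers never see a half-written snapshot. A simplified sketch of that pattern (an assumption about the mechanism; paths and names are illustrative):

```python
import os
import tempfile

def atomic_update(volume_dir, files):
    """Write `files` (name -> content) into volume_dir using the
    write-new-dir-then-swap-symlink pattern. Sketch only; the real
    kubelet atomic writer has more bookkeeping."""
    new_dir = tempfile.mkdtemp(prefix="..", dir=volume_dir)
    for name, content in files.items():
        with open(os.path.join(new_dir, name), "w") as f:
            f.write(content)
    tmp_link = os.path.join(volume_dir, "..data_tmp")
    data_link = os.path.join(volume_dir, "..data")
    os.symlink(os.path.basename(new_dir), tmp_link)
    os.replace(tmp_link, data_link)  # rename(2): atomic swap on POSIX
    # Top-level per-key symlinks point through ..data and never change.
    for name in files:
        link = os.path.join(volume_dir, name)
        if not os.path.islink(link):
            os.symlink(os.path.join("..data", name), link)
```

Because every key resolves through the single `..data` link, a reader opening the file mid-update gets either the old snapshot or the new one, never a mix.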
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:06:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5500" for this suite.
Feb 14 14:07:09.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:07:09.940: INFO: namespace configmap-5500 deletion completed in 22.161311962s

• [SLOW TEST:32.483 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:07:09.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0b7f5090-2025-4c06-aef6-ab9e9461da34
STEP: Creating a pod to test consume secrets
Feb 14 14:07:10.048: INFO: Waiting up to 5m0s for pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c" in namespace "secrets-7001" to be "success or failure"
Feb 14 14:07:10.057: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.226571ms
Feb 14 14:07:12.072: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023718709s
Feb 14 14:07:14.090: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042270988s
Feb 14 14:07:16.098: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050263977s
Feb 14 14:07:18.125: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077283934s
Feb 14 14:07:20.139: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09136097s
STEP: Saw pod success
Feb 14 14:07:20.139: INFO: Pod "pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c" satisfied condition "success or failure"
Feb 14 14:07:20.144: INFO: Trying to get logs from node iruya-node pod pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c container secret-volume-test: 
STEP: delete the pod
Feb 14 14:07:20.248: INFO: Waiting for pod pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c to disappear
Feb 14 14:07:20.275: INFO: Pod pod-secrets-89592e96-1cd4-476d-874e-23e0dcc5983c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:07:20.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7001" for this suite.
Feb 14 14:07:26.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:07:26.651: INFO: namespace secrets-7001 deletion completed in 6.364161279s

• [SLOW TEST:16.710 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:07:26.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-8636
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8636
STEP: Deleting pre-stop pod
Feb 14 14:07:49.884: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:07:49.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8636" for this suite.
Feb 14 14:08:29.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:08:30.070: INFO: namespace prestop-8636 deletion completed in 40.157558996s

• [SLOW TEST:63.418 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:08:30.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa in namespace container-probe-5279
Feb 14 14:08:38.281: INFO: Started pod liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa in namespace container-probe-5279
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 14:08:38.288: INFO: Initial restart count of pod liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is 0
Feb 14 14:08:54.376: INFO: Restart count of pod container-probe-5279/liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is now 1 (16.087395039s elapsed)
Feb 14 14:09:14.540: INFO: Restart count of pod container-probe-5279/liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is now 2 (36.25116484s elapsed)
Feb 14 14:09:32.715: INFO: Restart count of pod container-probe-5279/liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is now 3 (54.426684866s elapsed)
Feb 14 14:09:54.893: INFO: Restart count of pod container-probe-5279/liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is now 4 (1m16.604322099s elapsed)
Feb 14 14:10:55.198: INFO: Restart count of pod container-probe-5279/liveness-0f82fe7e-b03b-4251-9856-4cdb35ba95fa is now 5 (2m16.909560823s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:10:55.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5279" for this suite.
Feb 14 14:11:01.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:11:01.469: INFO: namespace container-probe-5279 deletion completed in 6.220832804s

• [SLOW TEST:151.400 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:11:01.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 14 14:11:09.624: INFO: Pod pod-hostip-01cc2895-a6f4-4d29-8cc4-dcd9f47ed0e3 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:11:09.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2414" for this suite.
Feb 14 14:11:29.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:11:29.831: INFO: namespace pods-2414 deletion completed in 20.195668794s

• [SLOW TEST:28.361 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:11:29.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 14 14:11:29.950: INFO: Waiting up to 5m0s for pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a" in namespace "containers-8684" to be "success or failure"
Feb 14 14:11:29.968: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.597473ms
Feb 14 14:11:31.978: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027376277s
Feb 14 14:11:34.006: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05571448s
Feb 14 14:11:36.026: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076108674s
Feb 14 14:11:38.074: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.123529899s
STEP: Saw pod success
Feb 14 14:11:38.074: INFO: Pod "client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a" satisfied condition "success or failure"
Feb 14 14:11:38.084: INFO: Trying to get logs from node iruya-node pod client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a container test-container: 
STEP: delete the pod
Feb 14 14:11:38.143: INFO: Waiting for pod client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a to disappear
Feb 14 14:11:38.151: INFO: Pod client-containers-2ded9ff4-8a0c-4f7d-9c76-cd23df47629a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:11:38.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8684" for this suite.
Feb 14 14:11:44.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:11:44.363: INFO: namespace containers-8684 deletion completed in 6.205441416s

• [SLOW TEST:14.532 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:11:44.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 14:11:53.072: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3ea1fade-84a7-4789-853a-b83cb755bbf7"
Feb 14 14:11:53.072: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3ea1fade-84a7-4789-853a-b83cb755bbf7" in namespace "pods-5159" to be "terminated due to deadline exceeded"
Feb 14 14:11:53.110: INFO: Pod "pod-update-activedeadlineseconds-3ea1fade-84a7-4789-853a-b83cb755bbf7": Phase="Running", Reason="", readiness=true. Elapsed: 38.610431ms
Feb 14 14:11:55.121: INFO: Pod "pod-update-activedeadlineseconds-3ea1fade-84a7-4789-853a-b83cb755bbf7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.04912603s
Feb 14 14:11:55.121: INFO: Pod "pod-update-activedeadlineseconds-3ea1fade-84a7-4789-853a-b83cb755bbf7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:11:55.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5159" for this suite.
Feb 14 14:12:01.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:12:01.259: INFO: namespace pods-5159 deletion completed in 6.132612598s

• [SLOW TEST:16.894 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:12:01.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 14 14:12:21.649: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:21.649: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:21.737023       8 log.go:172] (0xc001a849a0) (0xc0014cc460) Create stream
I0214 14:12:21.737232       8 log.go:172] (0xc001a849a0) (0xc0014cc460) Stream added, broadcasting: 1
I0214 14:12:21.747701       8 log.go:172] (0xc001a849a0) Reply frame received for 1
I0214 14:12:21.747751       8 log.go:172] (0xc001a849a0) (0xc002768000) Create stream
I0214 14:12:21.747769       8 log.go:172] (0xc001a849a0) (0xc002768000) Stream added, broadcasting: 3
I0214 14:12:21.750292       8 log.go:172] (0xc001a849a0) Reply frame received for 3
I0214 14:12:21.750323       8 log.go:172] (0xc001a849a0) (0xc0012e1720) Create stream
I0214 14:12:21.750334       8 log.go:172] (0xc001a849a0) (0xc0012e1720) Stream added, broadcasting: 5
I0214 14:12:21.752352       8 log.go:172] (0xc001a849a0) Reply frame received for 5
I0214 14:12:21.906867       8 log.go:172] (0xc001a849a0) Data frame received for 3
I0214 14:12:21.907004       8 log.go:172] (0xc002768000) (3) Data frame handling
I0214 14:12:21.907051       8 log.go:172] (0xc002768000) (3) Data frame sent
I0214 14:12:22.061486       8 log.go:172] (0xc001a849a0) Data frame received for 1
I0214 14:12:22.061755       8 log.go:172] (0xc0014cc460) (1) Data frame handling
I0214 14:12:22.061788       8 log.go:172] (0xc0014cc460) (1) Data frame sent
I0214 14:12:22.065414       8 log.go:172] (0xc001a849a0) (0xc0012e1720) Stream removed, broadcasting: 5
I0214 14:12:22.066171       8 log.go:172] (0xc001a849a0) (0xc0014cc460) Stream removed, broadcasting: 1
I0214 14:12:22.066596       8 log.go:172] (0xc001a849a0) (0xc002768000) Stream removed, broadcasting: 3
I0214 14:12:22.066662       8 log.go:172] (0xc001a849a0) Go away received
I0214 14:12:22.066976       8 log.go:172] (0xc001a849a0) (0xc0014cc460) Stream removed, broadcasting: 1
I0214 14:12:22.067016       8 log.go:172] (0xc001a849a0) (0xc002768000) Stream removed, broadcasting: 3
I0214 14:12:22.067047       8 log.go:172] (0xc001a849a0) (0xc0012e1720) Stream removed, broadcasting: 5
Feb 14 14:12:22.067: INFO: Exec stderr: ""
Feb 14 14:12:22.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:22.067: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:22.137759       8 log.go:172] (0xc002078000) (0xc001b3efa0) Create stream
I0214 14:12:22.137957       8 log.go:172] (0xc002078000) (0xc001b3efa0) Stream added, broadcasting: 1
I0214 14:12:22.146184       8 log.go:172] (0xc002078000) Reply frame received for 1
I0214 14:12:22.146260       8 log.go:172] (0xc002078000) (0xc0012e17c0) Create stream
I0214 14:12:22.146269       8 log.go:172] (0xc002078000) (0xc0012e17c0) Stream added, broadcasting: 3
I0214 14:12:22.148385       8 log.go:172] (0xc002078000) Reply frame received for 3
I0214 14:12:22.148420       8 log.go:172] (0xc002078000) (0xc0014cc500) Create stream
I0214 14:12:22.148433       8 log.go:172] (0xc002078000) (0xc0014cc500) Stream added, broadcasting: 5
I0214 14:12:22.150301       8 log.go:172] (0xc002078000) Reply frame received for 5
I0214 14:12:22.276293       8 log.go:172] (0xc002078000) Data frame received for 3
I0214 14:12:22.276465       8 log.go:172] (0xc0012e17c0) (3) Data frame handling
I0214 14:12:22.276521       8 log.go:172] (0xc0012e17c0) (3) Data frame sent
I0214 14:12:22.436439       8 log.go:172] (0xc002078000) (0xc0012e17c0) Stream removed, broadcasting: 3
I0214 14:12:22.437141       8 log.go:172] (0xc002078000) Data frame received for 1
I0214 14:12:22.437455       8 log.go:172] (0xc002078000) (0xc0014cc500) Stream removed, broadcasting: 5
I0214 14:12:22.437641       8 log.go:172] (0xc001b3efa0) (1) Data frame handling
I0214 14:12:22.437685       8 log.go:172] (0xc001b3efa0) (1) Data frame sent
I0214 14:12:22.437705       8 log.go:172] (0xc002078000) (0xc001b3efa0) Stream removed, broadcasting: 1
I0214 14:12:22.437799       8 log.go:172] (0xc002078000) Go away received
I0214 14:12:22.438127       8 log.go:172] (0xc002078000) (0xc001b3efa0) Stream removed, broadcasting: 1
I0214 14:12:22.438156       8 log.go:172] (0xc002078000) (0xc0012e17c0) Stream removed, broadcasting: 3
I0214 14:12:22.438173       8 log.go:172] (0xc002078000) (0xc0014cc500) Stream removed, broadcasting: 5
Feb 14 14:12:22.438: INFO: Exec stderr: ""
Feb 14 14:12:22.438: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:22.438: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:22.530638       8 log.go:172] (0xc000f06790) (0xc002768320) Create stream
I0214 14:12:22.530918       8 log.go:172] (0xc000f06790) (0xc002768320) Stream added, broadcasting: 1
I0214 14:12:22.543271       8 log.go:172] (0xc000f06790) Reply frame received for 1
I0214 14:12:22.543345       8 log.go:172] (0xc000f06790) (0xc001b3f040) Create stream
I0214 14:12:22.543356       8 log.go:172] (0xc000f06790) (0xc001b3f040) Stream added, broadcasting: 3
I0214 14:12:22.545053       8 log.go:172] (0xc000f06790) Reply frame received for 3
I0214 14:12:22.545136       8 log.go:172] (0xc000f06790) (0xc001b3f180) Create stream
I0214 14:12:22.545152       8 log.go:172] (0xc000f06790) (0xc001b3f180) Stream added, broadcasting: 5
I0214 14:12:22.554720       8 log.go:172] (0xc000f06790) Reply frame received for 5
I0214 14:12:22.720508       8 log.go:172] (0xc000f06790) Data frame received for 3
I0214 14:12:22.720580       8 log.go:172] (0xc001b3f040) (3) Data frame handling
I0214 14:12:22.720598       8 log.go:172] (0xc001b3f040) (3) Data frame sent
I0214 14:12:22.871822       8 log.go:172] (0xc000f06790) Data frame received for 1
I0214 14:12:22.872096       8 log.go:172] (0xc000f06790) (0xc001b3f040) Stream removed, broadcasting: 3
I0214 14:12:22.872160       8 log.go:172] (0xc002768320) (1) Data frame handling
I0214 14:12:22.872182       8 log.go:172] (0xc002768320) (1) Data frame sent
I0214 14:12:22.872442       8 log.go:172] (0xc000f06790) (0xc001b3f180) Stream removed, broadcasting: 5
I0214 14:12:22.872538       8 log.go:172] (0xc000f06790) (0xc002768320) Stream removed, broadcasting: 1
I0214 14:12:22.872586       8 log.go:172] (0xc000f06790) Go away received
I0214 14:12:22.872744       8 log.go:172] (0xc000f06790) (0xc002768320) Stream removed, broadcasting: 1
I0214 14:12:22.872760       8 log.go:172] (0xc000f06790) (0xc001b3f040) Stream removed, broadcasting: 3
I0214 14:12:22.872786       8 log.go:172] (0xc000f06790) (0xc001b3f180) Stream removed, broadcasting: 5
Feb 14 14:12:22.872: INFO: Exec stderr: ""
Feb 14 14:12:22.872: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:22.873: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:22.923283       8 log.go:172] (0xc000f07340) (0xc002768780) Create stream
I0214 14:12:22.923449       8 log.go:172] (0xc000f07340) (0xc002768780) Stream added, broadcasting: 1
I0214 14:12:22.929879       8 log.go:172] (0xc000f07340) Reply frame received for 1
I0214 14:12:22.929953       8 log.go:172] (0xc000f07340) (0xc00229d2c0) Create stream
I0214 14:12:22.929966       8 log.go:172] (0xc000f07340) (0xc00229d2c0) Stream added, broadcasting: 3
I0214 14:12:22.930934       8 log.go:172] (0xc000f07340) Reply frame received for 3
I0214 14:12:22.930958       8 log.go:172] (0xc000f07340) (0xc002768820) Create stream
I0214 14:12:22.930966       8 log.go:172] (0xc000f07340) (0xc002768820) Stream added, broadcasting: 5
I0214 14:12:22.932373       8 log.go:172] (0xc000f07340) Reply frame received for 5
I0214 14:12:23.033940       8 log.go:172] (0xc000f07340) Data frame received for 3
I0214 14:12:23.034656       8 log.go:172] (0xc00229d2c0) (3) Data frame handling
I0214 14:12:23.034721       8 log.go:172] (0xc00229d2c0) (3) Data frame sent
I0214 14:12:23.132780       8 log.go:172] (0xc000f07340) (0xc00229d2c0) Stream removed, broadcasting: 3
I0214 14:12:23.133401       8 log.go:172] (0xc000f07340) Data frame received for 1
I0214 14:12:23.133472       8 log.go:172] (0xc002768780) (1) Data frame handling
I0214 14:12:23.133510       8 log.go:172] (0xc002768780) (1) Data frame sent
I0214 14:12:23.133809       8 log.go:172] (0xc000f07340) (0xc002768780) Stream removed, broadcasting: 1
I0214 14:12:23.134001       8 log.go:172] (0xc000f07340) (0xc002768820) Stream removed, broadcasting: 5
I0214 14:12:23.134081       8 log.go:172] (0xc000f07340) (0xc002768780) Stream removed, broadcasting: 1
I0214 14:12:23.134097       8 log.go:172] (0xc000f07340) (0xc00229d2c0) Stream removed, broadcasting: 3
I0214 14:12:23.134104       8 log.go:172] (0xc000f07340) (0xc002768820) Stream removed, broadcasting: 5
I0214 14:12:23.134440       8 log.go:172] (0xc000f07340) Go away received
Feb 14 14:12:23.134: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 14 14:12:23.134: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:23.134: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:23.188671       8 log.go:172] (0xc002079130) (0xc001b3f4a0) Create stream
I0214 14:12:23.188719       8 log.go:172] (0xc002079130) (0xc001b3f4a0) Stream added, broadcasting: 1
I0214 14:12:23.193922       8 log.go:172] (0xc002079130) Reply frame received for 1
I0214 14:12:23.193946       8 log.go:172] (0xc002079130) (0xc0014cc640) Create stream
I0214 14:12:23.193953       8 log.go:172] (0xc002079130) (0xc0014cc640) Stream added, broadcasting: 3
I0214 14:12:23.195256       8 log.go:172] (0xc002079130) Reply frame received for 3
I0214 14:12:23.195365       8 log.go:172] (0xc002079130) (0xc0027688c0) Create stream
I0214 14:12:23.195376       8 log.go:172] (0xc002079130) (0xc0027688c0) Stream added, broadcasting: 5
I0214 14:12:23.197841       8 log.go:172] (0xc002079130) Reply frame received for 5
I0214 14:12:23.287969       8 log.go:172] (0xc002079130) Data frame received for 3
I0214 14:12:23.288031       8 log.go:172] (0xc0014cc640) (3) Data frame handling
I0214 14:12:23.288058       8 log.go:172] (0xc0014cc640) (3) Data frame sent
I0214 14:12:23.440335       8 log.go:172] (0xc002079130) Data frame received for 1
I0214 14:12:23.440624       8 log.go:172] (0xc002079130) (0xc0027688c0) Stream removed, broadcasting: 5
I0214 14:12:23.440795       8 log.go:172] (0xc001b3f4a0) (1) Data frame handling
I0214 14:12:23.440840       8 log.go:172] (0xc001b3f4a0) (1) Data frame sent
I0214 14:12:23.440884       8 log.go:172] (0xc002079130) (0xc0014cc640) Stream removed, broadcasting: 3
I0214 14:12:23.441353       8 log.go:172] (0xc002079130) (0xc001b3f4a0) Stream removed, broadcasting: 1
I0214 14:12:23.441934       8 log.go:172] (0xc002079130) Go away received
I0214 14:12:23.442115       8 log.go:172] (0xc002079130) (0xc001b3f4a0) Stream removed, broadcasting: 1
I0214 14:12:23.442171       8 log.go:172] (0xc002079130) (0xc0014cc640) Stream removed, broadcasting: 3
I0214 14:12:23.442190       8 log.go:172] (0xc002079130) (0xc0027688c0) Stream removed, broadcasting: 5
Feb 14 14:12:23.442: INFO: Exec stderr: ""
Feb 14 14:12:23.442: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:23.442: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:23.516878       8 log.go:172] (0xc000f8b760) (0xc00229d680) Create stream
I0214 14:12:23.517036       8 log.go:172] (0xc000f8b760) (0xc00229d680) Stream added, broadcasting: 1
I0214 14:12:23.522925       8 log.go:172] (0xc000f8b760) Reply frame received for 1
I0214 14:12:23.522979       8 log.go:172] (0xc000f8b760) (0xc00229d720) Create stream
I0214 14:12:23.522990       8 log.go:172] (0xc000f8b760) (0xc00229d720) Stream added, broadcasting: 3
I0214 14:12:23.525741       8 log.go:172] (0xc000f8b760) Reply frame received for 3
I0214 14:12:23.525782       8 log.go:172] (0xc000f8b760) (0xc0014cc820) Create stream
I0214 14:12:23.525792       8 log.go:172] (0xc000f8b760) (0xc0014cc820) Stream added, broadcasting: 5
I0214 14:12:23.527370       8 log.go:172] (0xc000f8b760) Reply frame received for 5
I0214 14:12:23.659550       8 log.go:172] (0xc000f8b760) Data frame received for 3
I0214 14:12:23.659650       8 log.go:172] (0xc00229d720) (3) Data frame handling
I0214 14:12:23.659678       8 log.go:172] (0xc00229d720) (3) Data frame sent
I0214 14:12:23.814840       8 log.go:172] (0xc000f8b760) Data frame received for 1
I0214 14:12:23.815048       8 log.go:172] (0xc000f8b760) (0xc0014cc820) Stream removed, broadcasting: 5
I0214 14:12:23.815249       8 log.go:172] (0xc00229d680) (1) Data frame handling
I0214 14:12:23.815273       8 log.go:172] (0xc00229d680) (1) Data frame sent
I0214 14:12:23.815451       8 log.go:172] (0xc000f8b760) (0xc00229d720) Stream removed, broadcasting: 3
I0214 14:12:23.815485       8 log.go:172] (0xc000f8b760) (0xc00229d680) Stream removed, broadcasting: 1
I0214 14:12:23.815643       8 log.go:172] (0xc000f8b760) (0xc00229d680) Stream removed, broadcasting: 1
I0214 14:12:23.815659       8 log.go:172] (0xc000f8b760) (0xc00229d720) Stream removed, broadcasting: 3
I0214 14:12:23.815669       8 log.go:172] (0xc000f8b760) (0xc0014cc820) Stream removed, broadcasting: 5
I0214 14:12:23.818525       8 log.go:172] (0xc000f8b760) Go away received
Feb 14 14:12:23.818: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 14 14:12:23.818: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:23.818: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:23.931110       8 log.go:172] (0xc00293c6e0) (0xc0014cce60) Create stream
I0214 14:12:23.931353       8 log.go:172] (0xc00293c6e0) (0xc0014cce60) Stream added, broadcasting: 1
I0214 14:12:23.971906       8 log.go:172] (0xc00293c6e0) Reply frame received for 1
I0214 14:12:23.972207       8 log.go:172] (0xc00293c6e0) (0xc0014ccfa0) Create stream
I0214 14:12:23.972231       8 log.go:172] (0xc00293c6e0) (0xc0014ccfa0) Stream added, broadcasting: 3
I0214 14:12:23.976803       8 log.go:172] (0xc00293c6e0) Reply frame received for 3
I0214 14:12:23.976884       8 log.go:172] (0xc00293c6e0) (0xc002768960) Create stream
I0214 14:12:23.976916       8 log.go:172] (0xc00293c6e0) (0xc002768960) Stream added, broadcasting: 5
I0214 14:12:23.980292       8 log.go:172] (0xc00293c6e0) Reply frame received for 5
I0214 14:12:24.176302       8 log.go:172] (0xc00293c6e0) Data frame received for 3
I0214 14:12:24.176433       8 log.go:172] (0xc0014ccfa0) (3) Data frame handling
I0214 14:12:24.176496       8 log.go:172] (0xc0014ccfa0) (3) Data frame sent
I0214 14:12:24.271813       8 log.go:172] (0xc00293c6e0) Data frame received for 1
I0214 14:12:24.272259       8 log.go:172] (0xc00293c6e0) (0xc002768960) Stream removed, broadcasting: 5
I0214 14:12:24.272341       8 log.go:172] (0xc0014cce60) (1) Data frame handling
I0214 14:12:24.272378       8 log.go:172] (0xc0014cce60) (1) Data frame sent
I0214 14:12:24.272442       8 log.go:172] (0xc00293c6e0) (0xc0014ccfa0) Stream removed, broadcasting: 3
I0214 14:12:24.272519       8 log.go:172] (0xc00293c6e0) (0xc0014cce60) Stream removed, broadcasting: 1
I0214 14:12:24.272560       8 log.go:172] (0xc00293c6e0) Go away received
I0214 14:12:24.273327       8 log.go:172] (0xc00293c6e0) (0xc0014cce60) Stream removed, broadcasting: 1
I0214 14:12:24.273365       8 log.go:172] (0xc00293c6e0) (0xc0014ccfa0) Stream removed, broadcasting: 3
I0214 14:12:24.273387       8 log.go:172] (0xc00293c6e0) (0xc002768960) Stream removed, broadcasting: 5
Feb 14 14:12:24.273: INFO: Exec stderr: ""
Feb 14 14:12:24.273: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:24.273: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:24.323472       8 log.go:172] (0xc002cb66e0) (0xc00229dae0) Create stream
I0214 14:12:24.323599       8 log.go:172] (0xc002cb66e0) (0xc00229dae0) Stream added, broadcasting: 1
I0214 14:12:24.329615       8 log.go:172] (0xc002cb66e0) Reply frame received for 1
I0214 14:12:24.329812       8 log.go:172] (0xc002cb66e0) (0xc001b3f680) Create stream
I0214 14:12:24.329851       8 log.go:172] (0xc002cb66e0) (0xc001b3f680) Stream added, broadcasting: 3
I0214 14:12:24.331920       8 log.go:172] (0xc002cb66e0) Reply frame received for 3
I0214 14:12:24.331939       8 log.go:172] (0xc002cb66e0) (0xc00229db80) Create stream
I0214 14:12:24.331946       8 log.go:172] (0xc002cb66e0) (0xc00229db80) Stream added, broadcasting: 5
I0214 14:12:24.333070       8 log.go:172] (0xc002cb66e0) Reply frame received for 5
I0214 14:12:24.416590       8 log.go:172] (0xc002cb66e0) Data frame received for 3
I0214 14:12:24.416740       8 log.go:172] (0xc001b3f680) (3) Data frame handling
I0214 14:12:24.416809       8 log.go:172] (0xc001b3f680) (3) Data frame sent
I0214 14:12:24.597733       8 log.go:172] (0xc002cb66e0) (0xc001b3f680) Stream removed, broadcasting: 3
I0214 14:12:24.597965       8 log.go:172] (0xc002cb66e0) Data frame received for 1
I0214 14:12:24.597984       8 log.go:172] (0xc00229dae0) (1) Data frame handling
I0214 14:12:24.598013       8 log.go:172] (0xc00229dae0) (1) Data frame sent
I0214 14:12:24.598094       8 log.go:172] (0xc002cb66e0) (0xc00229db80) Stream removed, broadcasting: 5
I0214 14:12:24.598118       8 log.go:172] (0xc002cb66e0) (0xc00229dae0) Stream removed, broadcasting: 1
I0214 14:12:24.598149       8 log.go:172] (0xc002cb66e0) Go away received
I0214 14:12:24.598427       8 log.go:172] (0xc002cb66e0) (0xc00229dae0) Stream removed, broadcasting: 1
I0214 14:12:24.598439       8 log.go:172] (0xc002cb66e0) (0xc001b3f680) Stream removed, broadcasting: 3
I0214 14:12:24.598471       8 log.go:172] (0xc002cb66e0) (0xc00229db80) Stream removed, broadcasting: 5
Feb 14 14:12:24.598: INFO: Exec stderr: ""
Feb 14 14:12:24.598: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:24.598: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:24.668929       8 log.go:172] (0xc000ed3130) (0xc0012e1ea0) Create stream
I0214 14:12:24.669051       8 log.go:172] (0xc000ed3130) (0xc0012e1ea0) Stream added, broadcasting: 1
I0214 14:12:24.673372       8 log.go:172] (0xc000ed3130) Reply frame received for 1
I0214 14:12:24.673395       8 log.go:172] (0xc000ed3130) (0xc0014cd0e0) Create stream
I0214 14:12:24.673401       8 log.go:172] (0xc000ed3130) (0xc0014cd0e0) Stream added, broadcasting: 3
I0214 14:12:24.674516       8 log.go:172] (0xc000ed3130) Reply frame received for 3
I0214 14:12:24.674532       8 log.go:172] (0xc000ed3130) (0xc002768a00) Create stream
I0214 14:12:24.674540       8 log.go:172] (0xc000ed3130) (0xc002768a00) Stream added, broadcasting: 5
I0214 14:12:24.676792       8 log.go:172] (0xc000ed3130) Reply frame received for 5
I0214 14:12:24.766919       8 log.go:172] (0xc000ed3130) Data frame received for 3
I0214 14:12:24.766997       8 log.go:172] (0xc0014cd0e0) (3) Data frame handling
I0214 14:12:24.767011       8 log.go:172] (0xc0014cd0e0) (3) Data frame sent
I0214 14:12:24.872123       8 log.go:172] (0xc000ed3130) Data frame received for 1
I0214 14:12:24.872207       8 log.go:172] (0xc0012e1ea0) (1) Data frame handling
I0214 14:12:24.872229       8 log.go:172] (0xc0012e1ea0) (1) Data frame sent
I0214 14:12:24.873097       8 log.go:172] (0xc000ed3130) (0xc0012e1ea0) Stream removed, broadcasting: 1
I0214 14:12:24.873987       8 log.go:172] (0xc000ed3130) (0xc0014cd0e0) Stream removed, broadcasting: 3
I0214 14:12:24.874077       8 log.go:172] (0xc000ed3130) (0xc002768a00) Stream removed, broadcasting: 5
I0214 14:12:24.874157       8 log.go:172] (0xc000ed3130) (0xc0012e1ea0) Stream removed, broadcasting: 1
I0214 14:12:24.874205       8 log.go:172] (0xc000ed3130) Go away received
I0214 14:12:24.874307       8 log.go:172] (0xc000ed3130) (0xc0014cd0e0) Stream removed, broadcasting: 3
I0214 14:12:24.874343       8 log.go:172] (0xc000ed3130) (0xc002768a00) Stream removed, broadcasting: 5
Feb 14 14:12:24.874: INFO: Exec stderr: ""
Feb 14 14:12:24.874: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4148 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:12:24.874: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:12:24.944542       8 log.go:172] (0xc00293d6b0) (0xc0014cd5e0) Create stream
I0214 14:12:24.944679       8 log.go:172] (0xc00293d6b0) (0xc0014cd5e0) Stream added, broadcasting: 1
I0214 14:12:24.949947       8 log.go:172] (0xc00293d6b0) Reply frame received for 1
I0214 14:12:24.949994       8 log.go:172] (0xc00293d6b0) (0xc00229dcc0) Create stream
I0214 14:12:24.950014       8 log.go:172] (0xc00293d6b0) (0xc00229dcc0) Stream added, broadcasting: 3
I0214 14:12:24.951997       8 log.go:172] (0xc00293d6b0) Reply frame received for 3
I0214 14:12:24.952015       8 log.go:172] (0xc00293d6b0) (0xc0014cd680) Create stream
I0214 14:12:24.952021       8 log.go:172] (0xc00293d6b0) (0xc0014cd680) Stream added, broadcasting: 5
I0214 14:12:24.954367       8 log.go:172] (0xc00293d6b0) Reply frame received for 5
I0214 14:12:25.042347       8 log.go:172] (0xc00293d6b0) Data frame received for 3
I0214 14:12:25.042475       8 log.go:172] (0xc00229dcc0) (3) Data frame handling
I0214 14:12:25.042500       8 log.go:172] (0xc00229dcc0) (3) Data frame sent
I0214 14:12:25.148963       8 log.go:172] (0xc00293d6b0) Data frame received for 1
I0214 14:12:25.149440       8 log.go:172] (0xc0014cd5e0) (1) Data frame handling
I0214 14:12:25.149519       8 log.go:172] (0xc0014cd5e0) (1) Data frame sent
I0214 14:12:25.149601       8 log.go:172] (0xc00293d6b0) (0xc0014cd5e0) Stream removed, broadcasting: 1
I0214 14:12:25.149673       8 log.go:172] (0xc00293d6b0) (0xc0014cd680) Stream removed, broadcasting: 5
I0214 14:12:25.149858       8 log.go:172] (0xc00293d6b0) (0xc00229dcc0) Stream removed, broadcasting: 3
I0214 14:12:25.149920       8 log.go:172] (0xc00293d6b0) Go away received
I0214 14:12:25.149994       8 log.go:172] (0xc00293d6b0) (0xc0014cd5e0) Stream removed, broadcasting: 1
I0214 14:12:25.150062       8 log.go:172] (0xc00293d6b0) (0xc00229dcc0) Stream removed, broadcasting: 3
I0214 14:12:25.150077       8 log.go:172] (0xc00293d6b0) (0xc0014cd680) Stream removed, broadcasting: 5
Feb 14 14:12:25.150: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:12:25.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4148" for this suite.
Feb 14 14:13:09.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:13:09.313: INFO: namespace e2e-kubelet-etc-hosts-4148 deletion completed in 44.152304386s

• [SLOW TEST:68.054 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:13:09.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-e766ef91-fbbb-4628-9cd2-01a6777669e0
STEP: Creating configMap with name cm-test-opt-upd-d9b4d507-448d-48ab-9645-ad28b693368f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e766ef91-fbbb-4628-9cd2-01a6777669e0
STEP: Updating configmap cm-test-opt-upd-d9b4d507-448d-48ab-9645-ad28b693368f
STEP: Creating configMap with name cm-test-opt-create-4f67eafe-a84a-41ce-ae5d-da4d3bae912e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:13:25.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6671" for this suite.
Feb 14 14:13:47.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:13:48.108: INFO: namespace configmap-6671 deletion completed in 22.136028383s

• [SLOW TEST:38.794 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:13:48.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-64c0fd87-18fd-4290-b2a9-fd0b5c217eed
STEP: Creating a pod to test consume configMaps
Feb 14 14:13:48.252: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae" in namespace "projected-613" to be "success or failure"
Feb 14 14:13:48.257: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.548042ms
Feb 14 14:13:50.263: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011557207s
Feb 14 14:13:52.274: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022702452s
Feb 14 14:13:54.280: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028378249s
Feb 14 14:13:56.302: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050831623s
Feb 14 14:13:58.718: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.466016831s
STEP: Saw pod success
Feb 14 14:13:58.718: INFO: Pod "pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae" satisfied condition "success or failure"
Feb 14 14:13:58.723: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 14:13:58.785: INFO: Waiting for pod pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae to disappear
Feb 14 14:13:58.793: INFO: Pod pod-projected-configmaps-b11db74f-5c57-4448-bd6b-2cad439c6bae no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:13:58.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-613" for this suite.
Feb 14 14:14:06.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:14:07.001: INFO: namespace projected-613 deletion completed in 8.202774365s

• [SLOW TEST:18.893 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:14:07.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 14 14:14:21.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b2441543-0f91-44a1-90b7-9543aea843af -c busybox-main-container --namespace=emptydir-4016 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 14 14:14:23.526: INFO: stderr: "I0214 14:14:23.239138    1794 log.go:172] (0xc00071a420) (0xc000734820) Create stream\nI0214 14:14:23.239254    1794 log.go:172] (0xc00071a420) (0xc000734820) Stream added, broadcasting: 1\nI0214 14:14:23.248669    1794 log.go:172] (0xc00071a420) Reply frame received for 1\nI0214 14:14:23.248711    1794 log.go:172] (0xc00071a420) (0xc0007348c0) Create stream\nI0214 14:14:23.248720    1794 log.go:172] (0xc00071a420) (0xc0007348c0) Stream added, broadcasting: 3\nI0214 14:14:23.250464    1794 log.go:172] (0xc00071a420) Reply frame received for 3\nI0214 14:14:23.250501    1794 log.go:172] (0xc00071a420) (0xc000594280) Create stream\nI0214 14:14:23.250511    1794 log.go:172] (0xc00071a420) (0xc000594280) Stream added, broadcasting: 5\nI0214 14:14:23.252008    1794 log.go:172] (0xc00071a420) Reply frame received for 5\nI0214 14:14:23.394843    1794 log.go:172] (0xc00071a420) Data frame received for 3\nI0214 14:14:23.394937    1794 log.go:172] (0xc0007348c0) (3) Data frame handling\nI0214 14:14:23.394975    1794 log.go:172] (0xc0007348c0) (3) Data frame sent\nI0214 14:14:23.512375    1794 log.go:172] (0xc00071a420) Data frame received for 1\nI0214 14:14:23.512425    1794 log.go:172] (0xc00071a420) (0xc0007348c0) Stream removed, broadcasting: 3\nI0214 14:14:23.512578    1794 log.go:172] (0xc00071a420) (0xc000594280) Stream removed, broadcasting: 5\nI0214 14:14:23.512732    1794 log.go:172] (0xc000734820) (1) Data frame handling\nI0214 14:14:23.512763    1794 log.go:172] (0xc000734820) (1) Data frame sent\nI0214 14:14:23.512794    1794 log.go:172] (0xc00071a420) (0xc000734820) Stream removed, broadcasting: 1\nI0214 14:14:23.512818    1794 log.go:172] (0xc00071a420) Go away received\nI0214 14:14:23.513737    1794 log.go:172] (0xc00071a420) (0xc000734820) Stream removed, broadcasting: 1\nI0214 14:14:23.513747    1794 log.go:172] (0xc00071a420) (0xc0007348c0) Stream removed, broadcasting: 3\nI0214 14:14:23.513751    1794 log.go:172] (0xc00071a420) (0xc000594280) Stream removed, broadcasting: 5\n"
Feb 14 14:14:23.526: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:14:23.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4016" for this suite.
Feb 14 14:14:29.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:14:29.698: INFO: namespace emptydir-4016 deletion completed in 6.164124393s

• [SLOW TEST:22.696 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:14:29.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:14:41.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4226" for this suite.
Feb 14 14:14:47.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:14:48.055: INFO: namespace kubelet-test-4226 deletion completed in 6.153436127s

• [SLOW TEST:18.357 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:14:48.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 14 14:14:48.316: INFO: Waiting up to 5m0s for pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997" in namespace "downward-api-1396" to be "success or failure"
Feb 14 14:14:48.325: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Pending", Reason="", readiness=false. Elapsed: 8.602647ms
Feb 14 14:14:50.343: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026703129s
Feb 14 14:14:52.351: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034663199s
Feb 14 14:14:54.359: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043492995s
Feb 14 14:14:56.374: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057583833s
Feb 14 14:14:58.387: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070877055s
STEP: Saw pod success
Feb 14 14:14:58.387: INFO: Pod "downward-api-512551e1-08e4-4ba3-bc90-4598f0057997" satisfied condition "success or failure"
Feb 14 14:14:58.393: INFO: Trying to get logs from node iruya-node pod downward-api-512551e1-08e4-4ba3-bc90-4598f0057997 container dapi-container: 
STEP: delete the pod
Feb 14 14:14:58.446: INFO: Waiting for pod downward-api-512551e1-08e4-4ba3-bc90-4598f0057997 to disappear
Feb 14 14:14:58.486: INFO: Pod downward-api-512551e1-08e4-4ba3-bc90-4598f0057997 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:14:58.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1396" for this suite.
Feb 14 14:15:04.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:15:04.600: INFO: namespace downward-api-1396 deletion completed in 6.106854874s

• [SLOW TEST:16.544 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:15:04.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2e328afa-574e-4014-a568-6ff37d52082d
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2e328afa-574e-4014-a568-6ff37d52082d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:15:14.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6510" for this suite.
Feb 14 14:15:36.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:15:37.049: INFO: namespace projected-6510 deletion completed in 22.107709415s

• [SLOW TEST:32.449 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:15:37.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:15:37.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4857'
Feb 14 14:15:37.887: INFO: stderr: ""
Feb 14 14:15:37.887: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 14 14:15:37.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4857'
Feb 14 14:15:38.621: INFO: stderr: ""
Feb 14 14:15:38.621: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 14 14:15:39.629: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:39.630: INFO: Found 0 / 1
Feb 14 14:15:40.642: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:40.642: INFO: Found 0 / 1
Feb 14 14:15:41.632: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:41.632: INFO: Found 0 / 1
Feb 14 14:15:42.639: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:42.640: INFO: Found 0 / 1
Feb 14 14:15:44.106: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:44.106: INFO: Found 0 / 1
Feb 14 14:15:44.638: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:44.638: INFO: Found 0 / 1
Feb 14 14:15:45.632: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:45.632: INFO: Found 0 / 1
Feb 14 14:15:46.648: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:46.648: INFO: Found 0 / 1
Feb 14 14:15:47.633: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:47.633: INFO: Found 1 / 1
Feb 14 14:15:47.633: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 14:15:47.640: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:15:47.641: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 14:15:47.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-w67k6 --namespace=kubectl-4857'
Feb 14 14:15:47.834: INFO: stderr: ""
Feb 14 14:15:47.834: INFO: stdout: "Name:           redis-master-w67k6\nNamespace:      kubectl-4857\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Fri, 14 Feb 2020 14:15:38 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://1352224e78243ea84d2bb32c0a2f420c632e55988cdc969895c5c178d4318beb\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 14 Feb 2020 14:15:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-d9r2c (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-d9r2c:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-d9r2c\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-4857/redis-master-w67k6 to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb 14 14:15:47.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4857'
Feb 14 14:15:48.024: INFO: stderr: ""
Feb 14 14:15:48.024: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4857\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: redis-master-w67k6\n"
Feb 14 14:15:48.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4857'
Feb 14 14:15:48.172: INFO: stderr: ""
Feb 14 14:15:48.173: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4857\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.109.10.223\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 14 14:15:48.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb 14 14:15:48.278: INFO: stderr: ""
Feb 14 14:15:48.279: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 14 Feb 2020 14:14:51 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 14 Feb 2020 14:14:51 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 14 Feb 2020 14:14:51 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 14 Feb 2020 14:14:51 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         194d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         125d\n  kubectl-4857               redis-master-w67k6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 14 14:15:48.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4857'
Feb 14 14:15:48.388: INFO: stderr: ""
Feb 14 14:15:48.388: INFO: stdout: "Name:         kubectl-4857\nLabels:       e2e-framework=kubectl\n              e2e-run=c17a9fbc-6909-4f44-abd6-f96cfc9860fc\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:15:48.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4857" for this suite.
Feb 14 14:16:10.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:16:10.539: INFO: namespace kubectl-4857 deletion completed in 22.147141067s

• [SLOW TEST:33.488 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:16:10.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1865
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-1865
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1865
Feb 14 14:16:10.698: INFO: Found 0 stateful pods, waiting for 1
Feb 14 14:16:20.712: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 14 14:16:20.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:16:21.232: INFO: stderr: "I0214 14:16:20.911250    1967 log.go:172] (0xc000502420) (0xc0003ae6e0) Create stream\nI0214 14:16:20.911372    1967 log.go:172] (0xc000502420) (0xc0003ae6e0) Stream added, broadcasting: 1\nI0214 14:16:20.921939    1967 log.go:172] (0xc000502420) Reply frame received for 1\nI0214 14:16:20.922010    1967 log.go:172] (0xc000502420) (0xc0002d01e0) Create stream\nI0214 14:16:20.922021    1967 log.go:172] (0xc000502420) (0xc0002d01e0) Stream added, broadcasting: 3\nI0214 14:16:20.923842    1967 log.go:172] (0xc000502420) Reply frame received for 3\nI0214 14:16:20.923867    1967 log.go:172] (0xc000502420) (0xc0002d0280) Create stream\nI0214 14:16:20.923872    1967 log.go:172] (0xc000502420) (0xc0002d0280) Stream added, broadcasting: 5\nI0214 14:16:20.925094    1967 log.go:172] (0xc000502420) Reply frame received for 5\nI0214 14:16:21.040823    1967 log.go:172] (0xc000502420) Data frame received for 5\nI0214 14:16:21.040919    1967 log.go:172] (0xc0002d0280) (5) Data frame handling\nI0214 14:16:21.040942    1967 log.go:172] (0xc0002d0280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:16:21.073344    1967 log.go:172] (0xc000502420) Data frame received for 3\nI0214 14:16:21.073652    1967 log.go:172] (0xc0002d01e0) (3) Data frame handling\nI0214 14:16:21.073723    1967 log.go:172] (0xc0002d01e0) (3) Data frame sent\nI0214 14:16:21.221565    1967 log.go:172] (0xc000502420) Data frame received for 1\nI0214 14:16:21.221882    1967 log.go:172] (0xc000502420) (0xc0002d0280) Stream removed, broadcasting: 5\nI0214 14:16:21.221939    1967 log.go:172] (0xc0003ae6e0) (1) Data frame handling\nI0214 14:16:21.221960    1967 log.go:172] (0xc0003ae6e0) (1) Data frame sent\nI0214 14:16:21.222001    1967 log.go:172] (0xc000502420) (0xc0002d01e0) Stream removed, broadcasting: 3\nI0214 14:16:21.222032    1967 log.go:172] (0xc000502420) (0xc0003ae6e0) Stream removed, broadcasting: 1\nI0214 14:16:21.222054    1967 log.go:172] (0xc000502420) Go away received\nI0214 14:16:21.222950    1967 log.go:172] (0xc000502420) (0xc0003ae6e0) Stream removed, broadcasting: 1\nI0214 14:16:21.222987    1967 log.go:172] (0xc000502420) (0xc0002d01e0) Stream removed, broadcasting: 3\nI0214 14:16:21.223000    1967 log.go:172] (0xc000502420) (0xc0002d0280) Stream removed, broadcasting: 5\n"
Feb 14 14:16:21.232: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:16:21.232: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:16:21.241: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 14 14:16:31.249: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:16:31.250: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:16:31.279: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb 14 14:16:31.279: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:16:31.279: INFO: 
Feb 14 14:16:31.279: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 14 14:16:32.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992507926s
Feb 14 14:16:33.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975719921s
Feb 14 14:16:35.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924926643s
Feb 14 14:16:36.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.941567316s
Feb 14 14:16:37.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.821809291s
Feb 14 14:16:38.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.813159452s
Feb 14 14:16:40.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.783199914s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1865
Feb 14 14:16:41.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:16:43.410: INFO: stderr: "I0214 14:16:41.615395    1985 log.go:172] (0xc000116840) (0xc0007906e0) Create stream\nI0214 14:16:41.615601    1985 log.go:172] (0xc000116840) (0xc0007906e0) Stream added, broadcasting: 1\nI0214 14:16:41.621394    1985 log.go:172] (0xc000116840) Reply frame received for 1\nI0214 14:16:41.621424    1985 log.go:172] (0xc000116840) (0xc000790780) Create stream\nI0214 14:16:41.621432    1985 log.go:172] (0xc000116840) (0xc000790780) Stream added, broadcasting: 3\nI0214 14:16:41.622871    1985 log.go:172] (0xc000116840) Reply frame received for 3\nI0214 14:16:41.622909    1985 log.go:172] (0xc000116840) (0xc000790820) Create stream\nI0214 14:16:41.622927    1985 log.go:172] (0xc000116840) (0xc000790820) Stream added, broadcasting: 5\nI0214 14:16:41.624958    1985 log.go:172] (0xc000116840) Reply frame received for 5\nI0214 14:16:43.191632    1985 log.go:172] (0xc000116840) Data frame received for 5\nI0214 14:16:43.191765    1985 log.go:172] (0xc000790820) (5) Data frame handling\nI0214 14:16:43.191875    1985 log.go:172] (0xc000790820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:16:43.203420    1985 log.go:172] (0xc000116840) Data frame received for 3\nI0214 14:16:43.203451    1985 log.go:172] (0xc000790780) (3) Data frame handling\nI0214 14:16:43.203484    1985 log.go:172] (0xc000790780) (3) Data frame sent\nI0214 14:16:43.394483    1985 log.go:172] (0xc000116840) Data frame received for 1\nI0214 14:16:43.394614    1985 log.go:172] (0xc000116840) (0xc000790780) Stream removed, broadcasting: 3\nI0214 14:16:43.394739    1985 log.go:172] (0xc0007906e0) (1) Data frame handling\nI0214 14:16:43.394771    1985 log.go:172] (0xc0007906e0) (1) Data frame sent\nI0214 14:16:43.394852    1985 log.go:172] (0xc000116840) (0xc000790820) Stream removed, broadcasting: 5\nI0214 14:16:43.394921    1985 log.go:172] (0xc000116840) (0xc0007906e0) Stream removed, broadcasting: 1\nI0214 14:16:43.394943    1985 log.go:172] (0xc000116840) Go away received\nI0214 14:16:43.396193    1985 log.go:172] (0xc000116840) (0xc0007906e0) Stream removed, broadcasting: 1\nI0214 14:16:43.396260    1985 log.go:172] (0xc000116840) (0xc000790780) Stream removed, broadcasting: 3\nI0214 14:16:43.396288    1985 log.go:172] (0xc000116840) (0xc000790820) Stream removed, broadcasting: 5\n"
Feb 14 14:16:43.410: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:16:43.410: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:16:43.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:16:44.209: INFO: rc: 1
Feb 14 14:16:44.210: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0023de960 exit status 1   true [0xc00274cc18 0xc00274cc30 0xc00274cc48] [0xc00274cc18 0xc00274cc30 0xc00274cc48] [0xc00274cc28 0xc00274cc40] [0xba6c50 0xba6c50] 0xc0018675c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 14 14:16:54.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:16:54.602: INFO: stderr: "I0214 14:16:54.378199    2022 log.go:172] (0xc0008424d0) (0xc0009206e0) Create stream\nI0214 14:16:54.378332    2022 log.go:172] (0xc0008424d0) (0xc0009206e0) Stream added, broadcasting: 1\nI0214 14:16:54.382651    2022 log.go:172] (0xc0008424d0) Reply frame received for 1\nI0214 14:16:54.382685    2022 log.go:172] (0xc0008424d0) (0xc0005583c0) Create stream\nI0214 14:16:54.382695    2022 log.go:172] (0xc0008424d0) (0xc0005583c0) Stream added, broadcasting: 3\nI0214 14:16:54.384705    2022 log.go:172] (0xc0008424d0) Reply frame received for 3\nI0214 14:16:54.384755    2022 log.go:172] (0xc0008424d0) (0xc0007d6000) Create stream\nI0214 14:16:54.384791    2022 log.go:172] (0xc0008424d0) (0xc0007d6000) Stream added, broadcasting: 5\nI0214 14:16:54.386490    2022 log.go:172] (0xc0008424d0) Reply frame received for 5\nI0214 14:16:54.481408    2022 log.go:172] (0xc0008424d0) Data frame received for 5\nI0214 14:16:54.481513    2022 log.go:172] (0xc0007d6000) (5) Data frame handling\nI0214 14:16:54.481599    2022 log.go:172] (0xc0007d6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0214 14:16:54.481702    2022 log.go:172] (0xc0008424d0) Data frame received for 3\nI0214 14:16:54.481741    2022 log.go:172] (0xc0005583c0) (3) Data frame handling\nI0214 14:16:54.481785    2022 log.go:172] (0xc0005583c0) (3) Data frame sent\nI0214 14:16:54.587981    2022 log.go:172] (0xc0008424d0) (0xc0005583c0) Stream removed, broadcasting: 3\nI0214 14:16:54.588187    2022 log.go:172] (0xc0008424d0) Data frame received for 1\nI0214 14:16:54.588223    2022 log.go:172] (0xc0009206e0) (1) Data frame handling\nI0214 14:16:54.588268    2022 log.go:172] (0xc0009206e0) (1) Data frame sent\nI0214 14:16:54.588714    2022 log.go:172] (0xc0008424d0) (0xc0009206e0) Stream removed, broadcasting: 1\nI0214 14:16:54.589148    2022 log.go:172] (0xc0008424d0) (0xc0007d6000) Stream removed, broadcasting: 5\nI0214 14:16:54.589341    2022 log.go:172] (0xc0008424d0) Go away received\nI0214 14:16:54.590276    2022 log.go:172] (0xc0008424d0) (0xc0009206e0) Stream removed, broadcasting: 1\nI0214 14:16:54.590537    2022 log.go:172] (0xc0008424d0) (0xc0005583c0) Stream removed, broadcasting: 3\nI0214 14:16:54.590665    2022 log.go:172] (0xc0008424d0) (0xc0007d6000) Stream removed, broadcasting: 5\n"
Feb 14 14:16:54.602: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:16:54.602: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:16:54.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:16:55.020: INFO: stderr: "I0214 14:16:54.751385    2041 log.go:172] (0xc0006ded10) (0xc0006e6b40) Create stream\nI0214 14:16:54.751534    2041 log.go:172] (0xc0006ded10) (0xc0006e6b40) Stream added, broadcasting: 1\nI0214 14:16:54.757165    2041 log.go:172] (0xc0006ded10) Reply frame received for 1\nI0214 14:16:54.757263    2041 log.go:172] (0xc0006ded10) (0xc0009d6000) Create stream\nI0214 14:16:54.757282    2041 log.go:172] (0xc0006ded10) (0xc0009d6000) Stream added, broadcasting: 3\nI0214 14:16:54.759031    2041 log.go:172] (0xc0006ded10) Reply frame received for 3\nI0214 14:16:54.759081    2041 log.go:172] (0xc0006ded10) (0xc00072c000) Create stream\nI0214 14:16:54.759098    2041 log.go:172] (0xc0006ded10) (0xc00072c000) Stream added, broadcasting: 5\nI0214 14:16:54.761138    2041 log.go:172] (0xc0006ded10) Reply frame received for 5\nI0214 14:16:54.873865    2041 log.go:172] (0xc0006ded10) Data frame received for 3\nI0214 14:16:54.873974    2041 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0214 14:16:54.874004    2041 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0214 14:16:54.874098    2041 log.go:172] (0xc0006ded10) Data frame received for 5\nI0214 14:16:54.874109    2041 log.go:172] (0xc00072c000) (5) Data frame handling\nI0214 14:16:54.874129    2041 log.go:172] (0xc00072c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0214 14:16:55.009561    2041 log.go:172] (0xc0006ded10) Data frame received for 1\nI0214 14:16:55.009770    2041 log.go:172] (0xc0006ded10) (0xc0009d6000) Stream removed, broadcasting: 3\nI0214 14:16:55.009875    2041 log.go:172] (0xc0006e6b40) (1) Data frame handling\nI0214 14:16:55.009906    2041 log.go:172] (0xc0006e6b40) (1) Data frame sent\nI0214 14:16:55.009919    2041 log.go:172] (0xc0006ded10) (0xc00072c000) Stream removed, broadcasting: 5\nI0214 14:16:55.009953    2041 log.go:172] (0xc0006ded10) (0xc0006e6b40) Stream removed, broadcasting: 1\nI0214 14:16:55.009992    2041 log.go:172] (0xc0006ded10) Go away received\nI0214 14:16:55.011045    2041 log.go:172] (0xc0006ded10) (0xc0006e6b40) Stream removed, broadcasting: 1\nI0214 14:16:55.011058    2041 log.go:172] (0xc0006ded10) (0xc0009d6000) Stream removed, broadcasting: 3\nI0214 14:16:55.011066    2041 log.go:172] (0xc0006ded10) (0xc00072c000) Stream removed, broadcasting: 5\n"
Feb 14 14:16:55.020: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:16:55.020: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:16:55.028: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:16:55.028: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:16:55.028: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 14 14:16:55.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:16:55.486: INFO: stderr: "I0214 14:16:55.216552    2063 log.go:172] (0xc000728630) (0xc000426aa0) Create stream\nI0214 14:16:55.216620    2063 log.go:172] (0xc000728630) (0xc000426aa0) Stream added, broadcasting: 1\nI0214 14:16:55.224453    2063 log.go:172] (0xc000728630) Reply frame received for 1\nI0214 14:16:55.224492    2063 log.go:172] (0xc000728630) (0xc0006ba000) Create stream\nI0214 14:16:55.224516    2063 log.go:172] (0xc000728630) (0xc0006ba000) Stream added, broadcasting: 3\nI0214 14:16:55.227039    2063 log.go:172] (0xc000728630) Reply frame received for 3\nI0214 14:16:55.227080    2063 log.go:172] (0xc000728630) (0xc00071a000) Create stream\nI0214 14:16:55.227105    2063 log.go:172] (0xc000728630) (0xc00071a000) Stream added, broadcasting: 5\nI0214 14:16:55.231592    2063 log.go:172] (0xc000728630) Reply frame received for 5\nI0214 14:16:55.335238    2063 log.go:172] (0xc000728630) Data frame received for 3\nI0214 14:16:55.335326    2063 log.go:172] (0xc0006ba000) (3) Data frame handling\nI0214 14:16:55.335348    2063 log.go:172] (0xc0006ba000) (3) Data frame sent\nI0214 14:16:55.335431    2063 log.go:172] (0xc000728630) Data frame received for 5\nI0214 14:16:55.335461    2063 log.go:172] (0xc00071a000) (5) Data frame handling\nI0214 14:16:55.335500    2063 log.go:172] (0xc00071a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:16:55.473390    2063 log.go:172] (0xc000728630) (0xc0006ba000) Stream removed, broadcasting: 3\nI0214 14:16:55.473510    2063 log.go:172] (0xc000728630) Data frame received for 1\nI0214 14:16:55.473547    2063 log.go:172] (0xc000426aa0) (1) Data frame handling\nI0214 14:16:55.473581    2063 log.go:172] (0xc000426aa0) (1) Data frame sent\nI0214 14:16:55.473630    2063 log.go:172] (0xc000728630) (0xc00071a000) Stream removed, broadcasting: 5\nI0214 14:16:55.473753    2063 log.go:172] (0xc000728630) (0xc000426aa0) Stream removed, broadcasting: 1\nI0214 14:16:55.474177    2063 log.go:172] (0xc000728630) Go away received\nI0214 14:16:55.475452    2063 log.go:172] (0xc000728630) (0xc000426aa0) Stream removed, broadcasting: 1\nI0214 14:16:55.475502    2063 log.go:172] (0xc000728630) (0xc0006ba000) Stream removed, broadcasting: 3\nI0214 14:16:55.475517    2063 log.go:172] (0xc000728630) (0xc00071a000) Stream removed, broadcasting: 5\n"
Feb 14 14:16:55.486: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:16:55.486: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:16:55.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:16:55.938: INFO: stderr: "I0214 14:16:55.740351    2083 log.go:172] (0xc00060c420) (0xc00060a780) Create stream\nI0214 14:16:55.740555    2083 log.go:172] (0xc00060c420) (0xc00060a780) Stream added, broadcasting: 1\nI0214 14:16:55.743898    2083 log.go:172] (0xc00060c420) Reply frame received for 1\nI0214 14:16:55.743976    2083 log.go:172] (0xc00060c420) (0xc000396280) Create stream\nI0214 14:16:55.743996    2083 log.go:172] (0xc00060c420) (0xc000396280) Stream added, broadcasting: 3\nI0214 14:16:55.745449    2083 log.go:172] (0xc00060c420) Reply frame received for 3\nI0214 14:16:55.745474    2083 log.go:172] (0xc00060c420) (0xc00060a820) Create stream\nI0214 14:16:55.745483    2083 log.go:172] (0xc00060c420) (0xc00060a820) Stream added, broadcasting: 5\nI0214 14:16:55.746429    2083 log.go:172] (0xc00060c420) Reply frame received for 5\nI0214 14:16:55.832541    2083 log.go:172] (0xc00060c420) Data frame received for 5\nI0214 14:16:55.832574    2083 log.go:172] (0xc00060a820) (5) Data frame handling\nI0214 14:16:55.832587    2083 log.go:172] (0xc00060a820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:16:55.853888    2083 log.go:172] (0xc00060c420) Data frame received for 3\nI0214 14:16:55.853935    2083 log.go:172] (0xc000396280) (3) Data frame handling\nI0214 14:16:55.853972    2083 log.go:172] (0xc000396280) (3) Data frame sent\nI0214 14:16:55.926975    2083 log.go:172] (0xc00060c420) (0xc000396280) Stream removed, broadcasting: 3\nI0214 14:16:55.927097    2083 log.go:172] (0xc00060c420) Data frame received for 1\nI0214 14:16:55.927111    2083 log.go:172] (0xc00060a780) (1) Data frame handling\nI0214 14:16:55.927122    2083 log.go:172] (0xc00060a780) (1) Data frame sent\nI0214 14:16:55.927131    2083 log.go:172] (0xc00060c420) (0xc00060a780) Stream removed, broadcasting: 1\nI0214 14:16:55.927702    2083 log.go:172] (0xc00060c420) (0xc00060a820) Stream removed, broadcasting: 5\nI0214 14:16:55.927752    2083 log.go:172] (0xc00060c420) Go away received\nI0214 14:16:55.927870    2083 log.go:172] (0xc00060c420) (0xc00060a780) Stream removed, broadcasting: 1\nI0214 14:16:55.927897    2083 log.go:172] (0xc00060c420) (0xc000396280) Stream removed, broadcasting: 3\nI0214 14:16:55.927912    2083 log.go:172] (0xc00060c420) (0xc00060a820) Stream removed, broadcasting: 5\n"
Feb 14 14:16:55.938: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:16:55.938: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:16:55.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:16:56.625: INFO: stderr: "I0214 14:16:56.127243    2103 log.go:172] (0xc0009944d0) (0xc00098a960) Create stream\nI0214 14:16:56.127962    2103 log.go:172] (0xc0009944d0) (0xc00098a960) Stream added, broadcasting: 1\nI0214 14:16:56.155498    2103 log.go:172] (0xc0009944d0) Reply frame received for 1\nI0214 14:16:56.155675    2103 log.go:172] (0xc0009944d0) (0xc00098a000) Create stream\nI0214 14:16:56.155705    2103 log.go:172] (0xc0009944d0) (0xc00098a000) Stream added, broadcasting: 3\nI0214 14:16:56.157689    2103 log.go:172] (0xc0009944d0) Reply frame received for 3\nI0214 14:16:56.157718    2103 log.go:172] (0xc0009944d0) (0xc00098a0a0) Create stream\nI0214 14:16:56.157725    2103 log.go:172] (0xc0009944d0) (0xc00098a0a0) Stream added, broadcasting: 5\nI0214 14:16:56.160387    2103 log.go:172] (0xc0009944d0) Reply frame received for 5\nI0214 14:16:56.341553    2103 log.go:172] (0xc0009944d0) Data frame received for 5\nI0214 14:16:56.341606    2103 log.go:172] (0xc00098a0a0) (5) Data frame handling\nI0214 14:16:56.341635    2103 log.go:172] (0xc00098a0a0) (5) Data frame sent\nI0214 14:16:56.341641    2103 log.go:172] (0xc0009944d0) Data frame received for 5\nI0214 14:16:56.341647    2103 log.go:172] (0xc00098a0a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:16:56.341683    2103 log.go:172] (0xc00098a0a0) (5) Data frame sent\nI0214 14:16:56.378037    2103 log.go:172] (0xc0009944d0) Data frame received for 3\nI0214 14:16:56.378063    2103 log.go:172] (0xc00098a000) (3) Data frame handling\nI0214 14:16:56.378078    2103 log.go:172] (0xc00098a000) (3) Data frame sent\nI0214 14:16:56.597143    2103 log.go:172] (0xc0009944d0) Data frame received for 1\nI0214 14:16:56.597734    2103 log.go:172] (0xc0009944d0) (0xc00098a0a0) Stream removed, broadcasting: 5\nI0214 14:16:56.597853    2103 log.go:172] (0xc00098a960) (1) Data frame handling\nI0214 14:16:56.598017    2103 log.go:172] (0xc00098a960) (1) Data frame sent\nI0214 14:16:56.598177    2103 log.go:172] (0xc0009944d0) (0xc00098a000) Stream removed, broadcasting: 3\nI0214 14:16:56.598237    2103 log.go:172] (0xc0009944d0) (0xc00098a960) Stream removed, broadcasting: 1\nI0214 14:16:56.598270    2103 log.go:172] (0xc0009944d0) Go away received\nI0214 14:16:56.600020    2103 log.go:172] (0xc0009944d0) (0xc00098a960) Stream removed, broadcasting: 1\nI0214 14:16:56.600051    2103 log.go:172] (0xc0009944d0) (0xc00098a000) Stream removed, broadcasting: 3\nI0214 14:16:56.600074    2103 log.go:172] (0xc0009944d0) (0xc00098a0a0) Stream removed, broadcasting: 5\n"
Feb 14 14:16:56.626: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:16:56.626: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:16:56.626: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:16:56.687: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:16:56.687: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:16:56.687: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 14:16:56.710: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:16:56.710: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:16:56.710: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:56.711: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:56.711: INFO: 
Feb 14 14:16:56.711: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:16:57.935: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:16:57.936: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:16:57.936: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:57.936: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:57.936: INFO: 
Feb 14 14:16:57.936: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:16:58.998: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:16:58.998: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:16:58.998: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:58.998: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:16:58.998: INFO: 
Feb 14 14:16:58.998: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:00.009: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:00.009: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:00.009: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:00.009: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:00.009: INFO: 
Feb 14 14:17:00.009: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:01.123: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:01.123: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:01.124: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:01.124: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:01.124: INFO: 
Feb 14 14:17:01.124: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:02.140: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:02.140: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:02.140: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:02.140: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:02.140: INFO: 
Feb 14 14:17:02.140: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:03.183: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:03.183: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:03.183: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:03.184: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:03.184: INFO: 
Feb 14 14:17:03.184: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:04.210: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:04.210: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:04.211: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:04.211: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:04.211: INFO: 
Feb 14 14:17:04.211: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:05.259: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:05.259: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:10 +0000 UTC  }]
Feb 14 14:17:05.259: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:05.259: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:05.259: INFO: 
Feb 14 14:17:05.259: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 14:17:06.266: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb 14 14:17:06.266: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:06.266: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:16:31 +0000 UTC  }]
Feb 14 14:17:06.266: INFO: 
Feb 14 14:17:06.266: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1865
Feb 14 14:17:07.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:07.524: INFO: rc: 1
Feb 14 14:17:07.525: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002c88fc0 exit status 1   true [0xc002856a20 0xc002856a50 0xc002856aa0] [0xc002856a20 0xc002856a50 0xc002856aa0] [0xc002856a30 0xc002856a80] [0xba6c50 0xba6c50] 0xc0026e3f80 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb 14 14:17:17.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:17.739: INFO: rc: 1
Feb 14 14:17:17.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002428120 exit status 1   true [0xc00071c9e8 0xc00071ca00 0xc00071ca18] [0xc00071c9e8 0xc00071ca00 0xc00071ca18] [0xc00071c9f8 0xc00071ca10] [0xba6c50 0xba6c50] 0xc002229d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:17:27.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:27.947: INFO: rc: 1
Feb 14 14:17:27.947: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0024281e0 exit status 1   true [0xc00071ca20 0xc00071ca38 0xc00071ca50] [0xc00071ca20 0xc00071ca38 0xc00071ca50] [0xc00071ca30 0xc00071ca48] [0xba6c50 0xba6c50] 0xc001542660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:17:37.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:38.156: INFO: rc: 1
Feb 14 14:17:38.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000705590 exit status 1   true [0xc0009140e8 0xc0009143e8 0xc000914778] [0xc0009140e8 0xc0009143e8 0xc000914778] [0xc000914278 0xc000914680] [0xba6c50 0xba6c50] 0xc001866480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:17:48.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:48.332: INFO: rc: 1
Feb 14 14:17:48.332: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de090 exit status 1   true [0xc00274c008 0xc00274c030 0xc00274c070] [0xc00274c008 0xc00274c030 0xc00274c070] [0xc00274c028 0xc00274c060] [0xba6c50 0xba6c50] 0xc0026e2300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:17:58.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:17:58.529: INFO: rc: 1
Feb 14 14:17:58.530: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000705680 exit status 1   true [0xc000914798 0xc000914858 0xc000914d08] [0xc000914798 0xc000914858 0xc000914d08] [0xc000914810 0xc000914ce8] [0xba6c50 0xba6c50] 0xc001866b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:08.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:08.701: INFO: rc: 1
Feb 14 14:18:08.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000705770 exit status 1   true [0xc000914d18 0xc000914f78 0xc000915040] [0xc000914d18 0xc000914f78 0xc000915040] [0xc000914f30 0xc000914ff8] [0xba6c50 0xba6c50] 0xc001866fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:18.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:18.836: INFO: rc: 1
Feb 14 14:18:18.836: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de150 exit status 1   true [0xc00274c088 0xc00274c0b0 0xc00274c0d8] [0xc00274c088 0xc00274c0b0 0xc00274c0d8] [0xc00274c098 0xc00274c0d0] [0xba6c50 0xba6c50] 0xc0026e2600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:28.970: INFO: rc: 1
Feb 14 14:18:28.971: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de210 exit status 1   true [0xc00274c108 0xc00274c168 0xc00274c208] [0xc00274c108 0xc00274c168 0xc00274c208] [0xc00274c140 0xc00274c1e8] [0xba6c50 0xba6c50] 0xc0026e2d80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:38.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:39.175: INFO: rc: 1
Feb 14 14:18:39.176: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de300 exit status 1   true [0xc00274c230 0xc00274c280 0xc00274c2c0] [0xc00274c230 0xc00274c280 0xc00274c2c0] [0xc00274c268 0xc00274c2b8] [0xba6c50 0xba6c50] 0xc0026e32c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:49.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:49.341: INFO: rc: 1
Feb 14 14:18:49.341: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de3c0 exit status 1   true [0xc00274c2c8 0xc00274c2e0 0xc00274c318] [0xc00274c2c8 0xc00274c2e0 0xc00274c318] [0xc00274c2d8 0xc00274c310] [0xba6c50 0xba6c50] 0xc0026e3860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:18:59.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:18:59.558: INFO: rc: 1
Feb 14 14:18:59.558: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de480 exit status 1   true [0xc00274c320 0xc00274c368 0xc00274c3b0] [0xc00274c320 0xc00274c368 0xc00274c3b0] [0xc00274c348 0xc00274c390] [0xba6c50 0xba6c50] 0xc0026e3bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:19:09.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:19:09.766: INFO: rc: 1
Feb 14 14:19:09.767: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de540 exit status 1   true [0xc00274c3d0 0xc00274c3f8 0xc00274c420] [0xc00274c3d0 0xc00274c3f8 0xc00274c420] [0xc00274c3f0 0xc00274c408] [0xba6c50 0xba6c50] 0xc0026e3f80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:19:19.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:19:22.021: INFO: rc: 1
Feb 14 14:19:22.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0027a8270 exit status 1   true [0xc0018c6000 0xc0018c6018 0xc0018c6030] [0xc0018c6000 0xc0018c6018 0xc0018c6030] [0xc0018c6010 0xc0018c6028] [0xba6c50 0xba6c50] 0xc0024d25a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:19:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:19:32.272: INFO: rc: 1
Feb 14 14:19:32.273: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de660 exit status 1   true [0xc00274c428 0xc00274c440 0xc00274c458] [0xc00274c428 0xc00274c440 0xc00274c458] [0xc00274c438 0xc00274c450] [0xba6c50 0xba6c50] 0xc002228cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:19:42.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:19:42.514: INFO: rc: 1
Feb 14 14:19:42.515: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de0c0 exit status 1   true [0xc00274c008 0xc00274c030 0xc00274c070] [0xc00274c008 0xc00274c030 0xc00274c070] [0xc00274c028 0xc00274c060] [0xba6c50 0xba6c50] 0xc0026e2240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:19:52.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:19:52.632: INFO: rc: 1
Feb 14 14:19:52.632: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de1b0 exit status 1   true [0xc00274c088 0xc00274c0b0 0xc00274c0d8] [0xc00274c088 0xc00274c0b0 0xc00274c0d8] [0xc00274c098 0xc00274c0d0] [0xba6c50 0xba6c50] 0xc0026e25a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:02.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:02.743: INFO: rc: 1
Feb 14 14:20:02.744: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002dce090 exit status 1   true [0xc0018c6000 0xc0018c6018 0xc0018c6030] [0xc0018c6000 0xc0018c6018 0xc0018c6030] [0xc0018c6010 0xc0018c6028] [0xba6c50 0xba6c50] 0xc002228c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:12.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:12.962: INFO: rc: 1
Feb 14 14:20:12.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de2a0 exit status 1   true [0xc00274c108 0xc00274c168 0xc00274c208] [0xc00274c108 0xc00274c168 0xc00274c208] [0xc00274c140 0xc00274c1e8] [0xba6c50 0xba6c50] 0xc0026e2cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:23.117: INFO: rc: 1
Feb 14 14:20:23.117: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0027a82d0 exit status 1   true [0xc0009140e8 0xc0009143e8 0xc000914778] [0xc0009140e8 0xc0009143e8 0xc000914778] [0xc000914278 0xc000914680] [0xba6c50 0xba6c50] 0xc0024d2600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:33.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:33.285: INFO: rc: 1
Feb 14 14:20:33.286: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0023de390 exit status 1   true [0xc00274c230 0xc00274c280 0xc00274c2c0] [0xc00274c230 0xc00274c280 0xc00274c2c0] [0xc00274c268 0xc00274c2b8] [0xba6c50 0xba6c50] 0xc0026e3200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:43.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:43.471: INFO: rc: 1
Feb 14 14:20:43.471: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002dce150 exit status 1   true [0xc0018c6038 0xc0018c6050 0xc0018c6068] [0xc0018c6038 0xc0018c6050 0xc0018c6068] [0xc0018c6048 0xc0018c6060] [0xba6c50 0xba6c50] 0xc002228fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:20:53.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:20:53.760: INFO: rc: 1
Feb 14 14:20:53.761: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000705650 exit status 1   true [0xc002856000 0xc002856018 0xc002856030] [0xc002856000 0xc002856018 0xc002856030] [0xc002856010 0xc002856028] [0xba6c50 0xba6c50] 0xc001866480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb 14 14:21:03.762 - 14:22:05.070: INFO: (seven identical retry cycles elided: each ran the same kubectl exec against pod "ss-1", got rc: 1 with stderr `Error from server (NotFound): pods "ss-1" not found`, and waited 10s to retry failed RunHostCmd)
Feb 14 14:22:15.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1865 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:22:15.226: INFO: rc: 1
Feb 14 14:22:15.227: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb 14 14:22:15.227: INFO: Scaling statefulset ss to 0
Feb 14 14:22:15.238: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 14 14:22:15.239: INFO: Deleting all statefulset in ns statefulset-1865
Feb 14 14:22:15.242: INFO: Scaling statefulset ss to 0
Feb 14 14:22:15.250: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:22:15.252: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:22:15.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1865" for this suite.
Feb 14 14:22:23.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:22:23.449: INFO: namespace statefulset-1865 deletion completed in 8.169411394s

• [SLOW TEST:372.910 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
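The burst-scaling test above retried `RunHostCmd` on a fixed 10 s interval until the `kubectl exec` stopped returning rc 1 (the pod `ss-1` was NotFound while the set scaled). A minimal sketch of that fixed-interval retry pattern, with hypothetical names (`retry_host_cmd`, `fake_cmd` are not the e2e framework's API):

```python
import time

def retry_host_cmd(run_cmd, attempts=5, interval=0.0):
    """Retry run_cmd until it returns rc 0, sleeping `interval` seconds
    between attempts (the e2e framework waits 10 s). Returns the last rc."""
    rc = None
    for _ in range(attempts):
        rc = run_cmd()
        if rc == 0:
            return rc
        time.sleep(interval)
    return rc

# Simulate a pod that only becomes reachable on the third attempt.
calls = {"n": 0}
def fake_cmd():
    calls["n"] += 1
    return 0 if calls["n"] >= 3 else 1  # rc 1 == 'pods "ss-1" not found'
```

The real framework gives up after a bounded number of cycles and logs the last stdout, as seen at 14:22:15 above.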
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:22:23.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 14:22:41.722: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 14:22:41.732: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 14:22:43.733: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 14:22:43.752: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 14:22:45.733: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 14:22:45.747: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 14:22:47.733: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 14:22:47.824: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 14:22:49.733: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 14:22:49.741: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:22:49.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1111" for this suite.
Feb 14 14:23:11.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:23:11.944: INFO: namespace container-lifecycle-hook-1111 deletion completed in 22.171442851s

• [SLOW TEST:48.494 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
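The lifecycle-hook test above creates a handler pod, then a pod whose container carries a `preStop` HTTPGet hook pointed at that handler, deletes the pod, and checks the handler saw the request. A sketch of the shape of such a pod spec (image, path, host, and port are placeholders, not the exact e2e fixture):

```python
# Illustrative pod manifest with a preStop HTTPGet lifecycle hook.
# Host/port/path are hypothetical stand-ins for the handler pod's values.
prestop_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-http-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-http-hook",
            "image": "nginx",
            "lifecycle": {
                "preStop": {
                    "httpGet": {
                        "path": "/echo?msg=prestop",  # hook target path
                        "port": 8080,
                        "host": "10.32.0.4",  # hypothetical handler pod IP
                    }
                }
            },
        }]
    },
}
```

On deletion, the kubelet fires the hook before sending SIGTERM, which is why the log waits for the pod to disappear before checking the handler.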
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:23:11.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 14 14:23:12.020: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:23:34.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6781" for this suite.
Feb 14 14:23:56.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:23:56.713: INFO: namespace init-container-6781 deletion completed in 22.140755804s

• [SLOW TEST:44.768 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
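The init-container test above ("PodSpec: initContainers in spec.initContainers") verifies that on a `RestartAlways` pod, each init container runs to completion, in order, before the app container starts. A sketch of that pod shape (container names and images are illustrative, not the e2e fixture's):

```python
# Illustrative RestartAlways pod: init1 and init2 must each exit 0,
# in order, before run1 is started.
init_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-init"},
    "spec": {
        "restartPolicy": "Always",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/true"]},
            {"name": "init2", "image": "busybox", "command": ["/bin/true"]},
        ],
        "containers": [
            {"name": "run1", "image": "k8s.gcr.io/pause:3.1"},
        ],
    },
}
```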
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:23:56.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:23:56.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86" in namespace "projected-7920" to be "success or failure"
Feb 14 14:23:56.886: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Pending", Reason="", readiness=false. Elapsed: 104.688604ms
Feb 14 14:23:58.897: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115030065s
Feb 14 14:24:00.914: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132796495s
Feb 14 14:24:02.924: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14209535s
Feb 14 14:24:04.940: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157827295s
Feb 14 14:24:06.951: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169072908s
STEP: Saw pod success
Feb 14 14:24:06.951: INFO: Pod "downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86" satisfied condition "success or failure"
Feb 14 14:24:06.954: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86 container client-container: 
STEP: delete the pod
Feb 14 14:24:07.204: INFO: Waiting for pod downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86 to disappear
Feb 14 14:24:07.226: INFO: Pod downwardapi-volume-3e0d93f4-56d8-403a-bac5-7b2a20d48d86 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:24:07.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7920" for this suite.
Feb 14 14:24:13.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:24:13.356: INFO: namespace projected-7920 deletion completed in 6.124408634s

• [SLOW TEST:16.643 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
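The projected downwardAPI test above mounts a volume that exposes the container's CPU request as a file, then reads it back from the container's logs. A sketch of the kind of manifest involved, under the assumption of illustrative names and a `250m` request (not the fixture's exact values):

```python
# Illustrative projected downwardAPI volume exposing requests.cpu as a
# file; divisor "1m" makes the file contain millicores ("250").
downward_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-test"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_request"],
            "resources": {"requests": {"cpu": "250m"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {"sources": [{
                "downwardAPI": {"items": [{
                    "path": "cpu_request",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "requests.cpu",
                        "divisor": "1m",
                    },
                }]},
            }]},
        }],
    },
}
```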
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:24:13.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-de546183-8b29-4170-901c-9cd7adc5afc2
STEP: Creating a pod to test consume secrets
Feb 14 14:24:13.560: INFO: Waiting up to 5m0s for pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401" in namespace "secrets-1926" to be "success or failure"
Feb 14 14:24:13.588: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Pending", Reason="", readiness=false. Elapsed: 27.454782ms
Feb 14 14:24:15.606: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045771352s
Feb 14 14:24:17.615: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054477504s
Feb 14 14:24:19.623: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06282493s
Feb 14 14:24:22.132: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Pending", Reason="", readiness=false. Elapsed: 8.571737828s
Feb 14 14:24:24.147: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.586678473s
STEP: Saw pod success
Feb 14 14:24:24.148: INFO: Pod "pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401" satisfied condition "success or failure"
Feb 14 14:24:24.152: INFO: Trying to get logs from node iruya-node pod pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401 container secret-volume-test: 
STEP: delete the pod
Feb 14 14:24:24.353: INFO: Waiting for pod pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401 to disappear
Feb 14 14:24:24.436: INFO: Pod pod-secrets-d8f338ef-8542-470d-b29e-9c6b125f4401 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:24:24.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1926" for this suite.
Feb 14 14:24:30.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:24:30.684: INFO: namespace secrets-1926 deletion completed in 6.230840293s

• [SLOW TEST:17.328 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:24:30.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 14 14:24:30.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 14 14:24:31.000: INFO: stderr: ""
Feb 14 14:24:31.001: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:24:31.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5045" for this suite.
Feb 14 14:24:37.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:24:37.220: INFO: namespace kubectl-5045 deletion completed in 6.213705657s

• [SLOW TEST:6.535 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
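The api-versions check above reduces to: split `kubectl api-versions` stdout on newlines and assert that the core `"v1"` group/version is present. A minimal sketch of that check, run against a trimmed subset of the stdout captured in the log:

```python
# Trimmed subset of the `kubectl api-versions` stdout logged above.
stdout = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"
api_versions = stdout.strip().split("\n")

def has_core_v1(versions):
    """True when the legacy core group/version "v1" is advertised."""
    return "v1" in versions
```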
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:24:37.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:24:37.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42" in namespace "projected-6029" to be "success or failure"
Feb 14 14:24:37.386: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.360074ms
Feb 14 14:24:39.394: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015951424s
Feb 14 14:24:41.403: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025431031s
Feb 14 14:24:43.410: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031996764s
Feb 14 14:24:45.418: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040114581s
Feb 14 14:24:47.427: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04937284s
STEP: Saw pod success
Feb 14 14:24:47.427: INFO: Pod "downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42" satisfied condition "success or failure"
Feb 14 14:24:47.431: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42 container client-container: 
STEP: delete the pod
Feb 14 14:24:47.775: INFO: Waiting for pod downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42 to disappear
Feb 14 14:24:47.853: INFO: Pod downwardapi-volume-f864eff8-c7e1-40b6-8504-efd12dfedf42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:24:47.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6029" for this suite.
Feb 14 14:24:53.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:24:54.015: INFO: namespace projected-6029 deletion completed in 6.148295673s

• [SLOW TEST:16.794 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
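The DefaultMode test above sets file permissions on a projected downwardAPI volume and verifies the mounted files carry that mode. A sketch of where `defaultMode` sits in such a spec (mode value and names are illustrative; the API takes the mode as an integer, so octal `0400` is 256 decimal):

```python
# Illustrative volume with defaultMode; every projected file gets mode
# 0400 unless an item overrides it. 0o400 is the hypothetical value here.
default_mode_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-test"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "defaultMode": 0o400,  # serialized as decimal 256
                "sources": [{"downwardAPI": {"items": [{
                    "path": "podname",
                    "fieldRef": {"fieldPath": "metadata.name"},
                }]}}],
            },
        }],
    },
}
```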
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:24:54.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0214 14:25:06.472353       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 14:25:06.472: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:25:06.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9000" for this suite.
Feb 14 14:25:12.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:25:12.614: INFO: namespace gc-9000 deletion completed in 6.136384683s

• [SLOW TEST:18.598 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
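The garbage-collector test above deletes an RC without orphaning and waits for its pods to be collected; the GC follows each pod's `ownerReferences` back to the deleted controller. A sketch of the reference shape involved, with placeholder values:

```python
# Sketch: the owner reference the garbage collector follows when the RC
# is deleted without orphaning. Name and uid are placeholders.
pod_owner_refs = [{
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "name": "simpletest.rc",
    "uid": "00000000-0000-0000-0000-000000000000",  # placeholder uid
    "controller": True,
    "blockOwnerDeletion": True,
}]

def is_controlled_by(refs, kind, name):
    """True if some ownerReference with controller=True matches kind/name."""
    return any(
        r.get("controller") and r["kind"] == kind and r["name"] == name
        for r in refs
    )
```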
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:25:12.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8c7f80ad-ee1e-4a6c-beb0-470216193a00
STEP: Creating a pod to test consume secrets
Feb 14 14:25:12.761: INFO: Waiting up to 5m0s for pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607" in namespace "secrets-6742" to be "success or failure"
Feb 14 14:25:12.768: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Pending", Reason="", readiness=false. Elapsed: 7.288234ms
Feb 14 14:25:14.777: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015681796s
Feb 14 14:25:16.823: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061665336s
Feb 14 14:25:18.831: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069838819s
Feb 14 14:25:20.840: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0786933s
Feb 14 14:25:22.874: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112668427s
STEP: Saw pod success
Feb 14 14:25:22.874: INFO: Pod "pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607" satisfied condition "success or failure"
Feb 14 14:25:22.898: INFO: Trying to get logs from node iruya-node pod pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607 container secret-volume-test: 
STEP: delete the pod
Feb 14 14:25:23.024: INFO: Waiting for pod pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607 to disappear
Feb 14 14:25:23.113: INFO: Pod pod-secrets-071976c5-e673-41d9-8cb6-c63a32ddc607 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:25:23.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6742" for this suite.
Feb 14 14:25:33.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:25:33.294: INFO: namespace secrets-6742 deletion completed in 10.159846501s

• [SLOW TEST:20.680 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
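The multi-volume secret test above mounts one secret through two separate volumes at different paths in the same pod. A sketch of that pod shape, assuming illustrative secret and volume names rather than the fixture's generated ones:

```python
# Illustrative pod consuming the same secret via two volumes.
multi_secret_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets"},
    "spec": {
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "volumeMounts": [
                {"name": "secret-volume-1",
                 "mountPath": "/etc/secret-volume-1", "readOnly": True},
                {"name": "secret-volume-2",
                 "mountPath": "/etc/secret-volume-2", "readOnly": True},
            ],
        }],
        "volumes": [
            {"name": "secret-volume-1", "secret": {"secretName": "my-secret"}},
            {"name": "secret-volume-2", "secret": {"secretName": "my-secret"}},
        ],
    },
}
```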
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:25:33.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:25:33.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287" in namespace "downward-api-8939" to be "success or failure"
Feb 14 14:25:33.480: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Pending", Reason="", readiness=false. Elapsed: 14.167899ms
Feb 14 14:25:35.489: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023430949s
Feb 14 14:25:37.505: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039712487s
Feb 14 14:25:39.517: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051561532s
Feb 14 14:25:41.524: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058319707s
Feb 14 14:25:43.562: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096506654s
STEP: Saw pod success
Feb 14 14:25:43.563: INFO: Pod "downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287" satisfied condition "success or failure"
Feb 14 14:25:43.583: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287 container client-container: 
STEP: delete the pod
Feb 14 14:25:43.739: INFO: Waiting for pod downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287 to disappear
Feb 14 14:25:43.746: INFO: Pod downwardapi-volume-aa36109e-0e0b-43b6-be2f-8941dc8bc287 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:25:43.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8939" for this suite.
Feb 14 14:25:50.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:25:50.382: INFO: namespace downward-api-8939 deletion completed in 6.628482539s

• [SLOW TEST:17.087 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
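The Downward API test above samples the pod phase roughly every two seconds until it leaves "Pending", up to a 5m0s cap. A minimal Python sketch of that wait loop follows; it is a simplified analogue of the e2e framework's pod-wait helper, not its actual implementation, and the `clock`/`sleep` parameters are assumptions added here so the loop can be exercised deterministically.

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until the pod reaches a terminal phase or the timeout.

    check() returns the current phase string ("Pending", "Running", ...).
    "Succeeded" or "Failed" ends the wait, mirroring the log's
    "success or failure" condition. Returns (phase, elapsed_seconds).
    """
    start = clock()
    while True:
        phase = check()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.0f}s")
        sleep(interval)
```

With a simulated clock, a pod that stays "Pending" for two polls and then succeeds reports an elapsed time of two intervals, matching the cadence of the "Elapsed: 2.0…s / 4.0…s" lines above.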
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:25:50.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:26:21.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9938" for this suite.
Feb 14 14:26:27.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:26:27.538: INFO: namespace namespaces-9938 deletion completed in 6.200384474s
STEP: Destroying namespace "nsdeletetest-3691" for this suite.
Feb 14 14:26:27.541: INFO: Namespace nsdeletetest-3691 was already deleted
STEP: Destroying namespace "nsdeletetest-3876" for this suite.
Feb 14 14:26:33.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:26:33.689: INFO: namespace nsdeletetest-3876 deletion completed in 6.148682363s

• [SLOW TEST:43.307 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:26:33.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:26:33.838: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198" in namespace "projected-4336" to be "success or failure"
Feb 14 14:26:33.861: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Pending", Reason="", readiness=false. Elapsed: 22.668446ms
Feb 14 14:26:35.875: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036231355s
Feb 14 14:26:37.889: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050699937s
Feb 14 14:26:39.904: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06503188s
Feb 14 14:26:41.912: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073284779s
Feb 14 14:26:43.927: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088574218s
STEP: Saw pod success
Feb 14 14:26:43.928: INFO: Pod "downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198" satisfied condition "success or failure"
Feb 14 14:26:43.935: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198 container client-container: 
STEP: delete the pod
Feb 14 14:26:44.972: INFO: Waiting for pod downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198 to disappear
Feb 14 14:26:44.980: INFO: Pod downwardapi-volume-3f7e87eb-f45e-45b5-9fa6-807b545b0198 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:26:44.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4336" for this suite.
Feb 14 14:26:51.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:26:51.207: INFO: namespace projected-4336 deletion completed in 6.220842873s

• [SLOW TEST:17.517 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:26:51.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1706
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 14:26:51.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 14:27:23.507: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1706 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:27:23.507: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:27:23.593389       8 log.go:172] (0xc0017cc6e0) (0xc0025b32c0) Create stream
I0214 14:27:23.593531       8 log.go:172] (0xc0017cc6e0) (0xc0025b32c0) Stream added, broadcasting: 1
I0214 14:27:23.610437       8 log.go:172] (0xc0017cc6e0) Reply frame received for 1
I0214 14:27:23.610594       8 log.go:172] (0xc0017cc6e0) (0xc0025b3360) Create stream
I0214 14:27:23.610615       8 log.go:172] (0xc0017cc6e0) (0xc0025b3360) Stream added, broadcasting: 3
I0214 14:27:23.613287       8 log.go:172] (0xc0017cc6e0) Reply frame received for 3
I0214 14:27:23.613330       8 log.go:172] (0xc0017cc6e0) (0xc0026e9220) Create stream
I0214 14:27:23.613345       8 log.go:172] (0xc0017cc6e0) (0xc0026e9220) Stream added, broadcasting: 5
I0214 14:27:23.618279       8 log.go:172] (0xc0017cc6e0) Reply frame received for 5
I0214 14:27:23.817337       8 log.go:172] (0xc0017cc6e0) Data frame received for 3
I0214 14:27:23.817492       8 log.go:172] (0xc0025b3360) (3) Data frame handling
I0214 14:27:23.817537       8 log.go:172] (0xc0025b3360) (3) Data frame sent
I0214 14:27:24.069632       8 log.go:172] (0xc0017cc6e0) Data frame received for 1
I0214 14:27:24.070018       8 log.go:172] (0xc0017cc6e0) (0xc0026e9220) Stream removed, broadcasting: 5
I0214 14:27:24.070188       8 log.go:172] (0xc0025b32c0) (1) Data frame handling
I0214 14:27:24.070299       8 log.go:172] (0xc0025b32c0) (1) Data frame sent
I0214 14:27:24.070466       8 log.go:172] (0xc0017cc6e0) (0xc0025b3360) Stream removed, broadcasting: 3
I0214 14:27:24.070539       8 log.go:172] (0xc0017cc6e0) (0xc0025b32c0) Stream removed, broadcasting: 1
I0214 14:27:24.070609       8 log.go:172] (0xc0017cc6e0) Go away received
I0214 14:27:24.071531       8 log.go:172] (0xc0017cc6e0) (0xc0025b32c0) Stream removed, broadcasting: 1
I0214 14:27:24.071613       8 log.go:172] (0xc0017cc6e0) (0xc0025b3360) Stream removed, broadcasting: 3
I0214 14:27:24.071625       8 log.go:172] (0xc0017cc6e0) (0xc0026e9220) Stream removed, broadcasting: 5
Feb 14 14:27:24.071: INFO: Found all expected endpoints: [netserver-0]
Feb 14 14:27:24.086: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1706 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 14:27:24.086: INFO: >>> kubeConfig: /root/.kube/config
I0214 14:27:24.203188       8 log.go:172] (0xc001da5c30) (0xc0026e94a0) Create stream
I0214 14:27:24.203345       8 log.go:172] (0xc001da5c30) (0xc0026e94a0) Stream added, broadcasting: 1
I0214 14:27:24.224139       8 log.go:172] (0xc001da5c30) Reply frame received for 1
I0214 14:27:24.224290       8 log.go:172] (0xc001da5c30) (0xc0024ffcc0) Create stream
I0214 14:27:24.224301       8 log.go:172] (0xc001da5c30) (0xc0024ffcc0) Stream added, broadcasting: 3
I0214 14:27:24.226453       8 log.go:172] (0xc001da5c30) Reply frame received for 3
I0214 14:27:24.226490       8 log.go:172] (0xc001da5c30) (0xc000be54a0) Create stream
I0214 14:27:24.226502       8 log.go:172] (0xc001da5c30) (0xc000be54a0) Stream added, broadcasting: 5
I0214 14:27:24.228707       8 log.go:172] (0xc001da5c30) Reply frame received for 5
I0214 14:27:24.403705       8 log.go:172] (0xc001da5c30) Data frame received for 3
I0214 14:27:24.403829       8 log.go:172] (0xc0024ffcc0) (3) Data frame handling
I0214 14:27:24.403866       8 log.go:172] (0xc0024ffcc0) (3) Data frame sent
I0214 14:27:24.627197       8 log.go:172] (0xc001da5c30) (0xc0024ffcc0) Stream removed, broadcasting: 3
I0214 14:27:24.627531       8 log.go:172] (0xc001da5c30) Data frame received for 1
I0214 14:27:24.627557       8 log.go:172] (0xc0026e94a0) (1) Data frame handling
I0214 14:27:24.627578       8 log.go:172] (0xc0026e94a0) (1) Data frame sent
I0214 14:27:24.627698       8 log.go:172] (0xc001da5c30) (0xc0026e94a0) Stream removed, broadcasting: 1
I0214 14:27:24.628121       8 log.go:172] (0xc001da5c30) (0xc000be54a0) Stream removed, broadcasting: 5
I0214 14:27:24.628204       8 log.go:172] (0xc001da5c30) (0xc0026e94a0) Stream removed, broadcasting: 1
I0214 14:27:24.628226       8 log.go:172] (0xc001da5c30) (0xc0024ffcc0) Stream removed, broadcasting: 3
I0214 14:27:24.628476       8 log.go:172] (0xc001da5c30) Go away received
I0214 14:27:24.628507       8 log.go:172] (0xc001da5c30) (0xc000be54a0) Stream removed, broadcasting: 5
Feb 14 14:27:24.628: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:27:24.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1706" for this suite.
Feb 14 14:27:48.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:27:48.830: INFO: namespace pod-network-test-1706 deletion completed in 24.189986968s

• [SLOW TEST:57.622 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
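The networking check above curls each netserver pod's `/hostName` endpoint from the host-test container and declares success once every expected pod name has answered ("Found all expected endpoints: [netserver-0]"). A small Python sketch of that bookkeeping is below; `fetch` stands in for the exec'd curl and is an assumption of this sketch, not part of the framework.

```python
def find_endpoints(pod_ips, fetch):
    """Query each target pod IP's /hostName endpoint via fetch(ip) and
    collect the names that answered, mimicking the log's
    'Found all expected endpoints' check. Empty or missing replies are
    skipped, as curl piped through grep -v '^\\s*$' would drop them."""
    found = set()
    for ip in pod_ips:
        name = (fetch(ip) or "").strip()
        if name:
            found.add(name)
    return found
```

A test would then compare `found` against the expected set of netserver pod names; any shortfall means a node-to-pod path is broken.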
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:27:48.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.144_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1014.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1014.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1014.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.108.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.108.144_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 14:28:03.302: INFO: Unable to read wheezy_udp@dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.313: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.327: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.336: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.353: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.362: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.370: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.376: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.382: INFO: Unable to read 10.105.108.144_udp@PTR from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.387: INFO: Unable to read 10.105.108.144_tcp@PTR from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.395: INFO: Unable to read jessie_udp@dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.417: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.444: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.450: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.455: INFO: Unable to read jessie_udp@PodARecord from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.462: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.484: INFO: Unable to read 10.105.108.144_udp@PTR from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.545: INFO: Unable to read 10.105.108.144_tcp@PTR from pod dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f: the server could not find the requested resource (get pods dns-test-7b9cf618-011b-4143-9d87-e9a68400553f)
Feb 14 14:28:03.545: INFO: Lookups using dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f failed for: [wheezy_udp@dns-test-service.dns-1014.svc.cluster.local wheezy_tcp@dns-test-service.dns-1014.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.105.108.144_udp@PTR 10.105.108.144_tcp@PTR jessie_udp@dns-test-service.dns-1014.svc.cluster.local jessie_tcp@dns-test-service.dns-1014.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1014.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-1014.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-1014.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.105.108.144_udp@PTR 10.105.108.144_tcp@PTR]

Feb 14 14:28:08.728: INFO: DNS probes using dns-1014/dns-test-7b9cf618-011b-4143-9d87-e9a68400553f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:28:09.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1014" for this suite.
Feb 14 14:28:15.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:28:15.458: INFO: namespace dns-1014 deletion completed in 6.29736595s

• [SLOW TEST:26.627 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
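The DNS probe commands above write one `/results` marker file per (record, transport) pair, and the "Lookups … failed for:" line enumerates exactly those names. As an illustration only (a reconstruction of the naming scheme visible in the log, not code from the framework), the expected file names for one probe runtime can be generated like this:

```python
def probe_result_names(runtime, service, namespace, svc_ip):
    """Enumerate the /results file names the DNS probe pod writes,
    in the same order they appear in the log: each record is checked
    over UDP then TCP, with the service ClusterIP PTR checks last."""
    zone = f"{namespace}.svc.cluster.local"
    records = [
        f"{service}.{zone}",                      # service A record
        f"_http._tcp.{service}.{zone}",           # service SRV record
        f"_http._tcp.test-service-2.{zone}",      # second service SRV
        "PodARecord",                             # probe pod's own A record
    ]
    names = []
    for rec in records:
        for proto in ("udp", "tcp"):
            names.append(f"{runtime}_{proto}@{rec}")
    for proto in ("udp", "tcp"):
        names.append(f"{svc_ip}_{proto}@PTR")
    return names
```

For the wheezy runtime in namespace dns-1014 this yields the ten wheezy entries listed in the failure line; the probe is retried until every marker file appears, which is why the initial "Unable to read" errors at 14:28:03 resolve to "DNS probes … succeeded" five seconds later.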
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:28:15.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 14:28:24.179: INFO: Successfully updated pod "pod-update-88cfaf4c-6aa0-4bf6-8386-f8d1ddf018dd"
STEP: verifying the updated pod is in kubernetes
Feb 14 14:28:24.200: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:28:24.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3189" for this suite.
Feb 14 14:28:46.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:28:46.331: INFO: namespace pods-3189 deletion completed in 22.125346904s

• [SLOW TEST:30.872 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:28:46.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 14:28:46.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7536'
Feb 14 14:28:49.476: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 14:28:49.477: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 14 14:28:49.628: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-c2mcz]
Feb 14 14:28:49.628: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-c2mcz" in namespace "kubectl-7536" to be "running and ready"
Feb 14 14:28:49.667: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Pending", Reason="", readiness=false. Elapsed: 39.028545ms
Feb 14 14:28:51.679: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051336067s
Feb 14 14:28:53.782: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153748167s
Feb 14 14:28:55.792: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163623854s
Feb 14 14:28:57.802: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173943527s
Feb 14 14:28:59.835: INFO: Pod "e2e-test-nginx-rc-c2mcz": Phase="Running", Reason="", readiness=true. Elapsed: 10.206700746s
Feb 14 14:28:59.835: INFO: Pod "e2e-test-nginx-rc-c2mcz" satisfied condition "running and ready"
Feb 14 14:28:59.835: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-c2mcz]
Feb 14 14:28:59.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7536'
Feb 14 14:29:00.028: INFO: stderr: ""
Feb 14 14:29:00.028: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 14 14:29:00.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7536'
Feb 14 14:29:00.136: INFO: stderr: ""
Feb 14 14:29:00.137: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:29:00.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7536" for this suite.
Feb 14 14:29:22.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:29:22.293: INFO: namespace kubectl-7536 deletion completed in 22.149600803s

• [SLOW TEST:35.962 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
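Note on the test above: the stderr line records that `kubectl run --generator=run/v1` is deprecated. A sketch of an equivalent explicit ReplicationController manifest, assuming the defaults the `run/v1` generator historically applied (one replica, a `run` label keyed to the name, container named after the resource):

```yaml
# Hypothetical manifest equivalent to the deprecated
# `kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1`
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

Created with `kubectl create -f` in the test namespace, this would produce the same `replicationcontroller/e2e-test-nginx-rc created` result without relying on the deprecated generator.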
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:29:22.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-4d2c8 in namespace proxy-9635
I0214 14:29:22.575936       8 runners.go:180] Created replication controller with name: proxy-service-4d2c8, namespace: proxy-9635, replica count: 1
I0214 14:29:23.628586       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:24.629743       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:25.630609       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:26.631623       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:27.632278       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:28.633456       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:29.634204       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:30.635242       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:31.636210       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:32.637320       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:29:33.638157       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 14:29:34.639510       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 14:29:35.640565       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 14:29:36.641282       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 14:29:37.641984       8 runners.go:180] proxy-service-4d2c8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 14:29:37.698: INFO: setup took 15.268429992s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
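The 320 attempts below exercise the apiserver proxy subresource in its different URL forms. A rough sketch of the patterns, with placeholders filled from this run (pod `proxy-service-4d2c8-wm47b`, service `proxy-service-4d2c8` in namespace `proxy-9635`):

```
# Pod proxy: optionally scheme-prefixed (http:/https:) and port-qualified
/api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path>
#   e.g. /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/
#        /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/

# Service proxy: addressed by named port rather than port number
/api/v1/namespaces/<ns>/services/[<scheme>:]<svc>:<portname>/proxy/<path>
#   e.g. /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/
```

Each logged line reports the case index in parentheses, the URL hit, the (truncated) response body, the HTTP status, and the observed latency.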
Feb 14 14:29:37.745: INFO: (0) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 46.03917ms)
Feb 14 14:29:37.755: INFO: (0) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 56.411375ms)
Feb 14 14:29:37.755: INFO: (0) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 56.288332ms)
Feb 14 14:29:37.762: INFO: (0) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 63.058576ms)
Feb 14 14:29:37.763: INFO: (0) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 64.126836ms)
Feb 14 14:29:37.763: INFO: (0) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 64.5559ms)
Feb 14 14:29:37.764: INFO: (0) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 65.665394ms)
Feb 14 14:29:37.764: INFO: (0) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 65.865366ms)
Feb 14 14:29:37.765: INFO: (0) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 65.718055ms)
Feb 14 14:29:37.765: INFO: (0) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 65.858257ms)
Feb 14 14:29:37.765: INFO: (0) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 65.694117ms)
Feb 14 14:29:37.769: INFO: (0) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 69.473732ms)
Feb 14 14:29:37.771: INFO: (0) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: ... (200; 31.608791ms)
Feb 14 14:29:37.808: INFO: (1) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 30.634712ms)
Feb 14 14:29:37.808: INFO: (1) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 30.897621ms)
Feb 14 14:29:37.808: INFO: (1) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 31.374364ms)
Feb 14 14:29:37.809: INFO: (1) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 32.540566ms)
Feb 14 14:29:37.809: INFO: (1) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 32.115185ms)
Feb 14 14:29:37.811: INFO: (1) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 33.828809ms)
Feb 14 14:29:37.811: INFO: (1) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 33.784529ms)
Feb 14 14:29:37.811: INFO: (1) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 33.882218ms)
Feb 14 14:29:37.812: INFO: (1) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 34.149876ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 24.866829ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 24.814892ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 24.974316ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 25.121938ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 25.105177ms)
Feb 14 14:29:37.837: INFO: (2) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 25.085039ms)
Feb 14 14:29:37.838: INFO: (2) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 26.419937ms)
Feb 14 14:29:37.839: INFO: (2) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 27.151439ms)
Feb 14 14:29:37.839: INFO: (2) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 27.659704ms)
Feb 14 14:29:37.843: INFO: (2) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 31.459679ms)
Feb 14 14:29:37.843: INFO: (2) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 31.684185ms)
Feb 14 14:29:37.843: INFO: (2) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 31.593675ms)
Feb 14 14:29:37.844: INFO: (2) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 31.742292ms)
Feb 14 14:29:37.844: INFO: (2) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 31.714102ms)
Feb 14 14:29:37.844: INFO: (2) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 31.956722ms)
Feb 14 14:29:37.870: INFO: (3) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 25.873699ms)
Feb 14 14:29:37.870: INFO: (3) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 26.23352ms)
Feb 14 14:29:37.870: INFO: (3) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 26.259543ms)
Feb 14 14:29:37.870: INFO: (3) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 26.607134ms)
Feb 14 14:29:37.870: INFO: (3) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 26.399898ms)
Feb 14 14:29:37.871: INFO: (3) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 27.035249ms)
Feb 14 14:29:37.872: INFO: (3) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 28.564616ms)
Feb 14 14:29:37.873: INFO: (3) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: ... (200; 9.65483ms)
Feb 14 14:29:37.891: INFO: (4) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 9.967521ms)
Feb 14 14:29:37.892: INFO: (4) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 14.212961ms)
Feb 14 14:29:37.896: INFO: (4) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 14.483882ms)
Feb 14 14:29:37.896: INFO: (4) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 14.73721ms)
Feb 14 14:29:37.897: INFO: (4) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 15.433444ms)
Feb 14 14:29:37.897: INFO: (4) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 15.87089ms)
Feb 14 14:29:37.906: INFO: (4) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 24.360582ms)
Feb 14 14:29:37.906: INFO: (4) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 24.373245ms)
Feb 14 14:29:37.906: INFO: (4) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 24.676896ms)
Feb 14 14:29:37.906: INFO: (4) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 24.681523ms)
Feb 14 14:29:37.906: INFO: (4) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 25.306066ms)
Feb 14 14:29:37.909: INFO: (4) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 27.695486ms)
Feb 14 14:29:37.935: INFO: (5) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 24.891184ms)
Feb 14 14:29:37.935: INFO: (5) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 25.652174ms)
Feb 14 14:29:37.935: INFO: (5) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 25.668773ms)
Feb 14 14:29:37.936: INFO: (5) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 26.23308ms)
Feb 14 14:29:37.937: INFO: (5) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 27.127114ms)
Feb 14 14:29:37.937: INFO: (5) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 27.704281ms)
Feb 14 14:29:37.937: INFO: (5) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 27.449548ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 28.239906ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 28.251978ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 28.748849ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 28.458477ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 28.322358ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 28.694095ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 28.425186ms)
Feb 14 14:29:37.938: INFO: (5) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 29.140544ms)
Feb 14 14:29:37.949: INFO: (6) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 21.090888ms)
Feb 14 14:29:37.960: INFO: (6) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 20.703633ms)
Feb 14 14:29:37.960: INFO: (6) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 20.757697ms)
Feb 14 14:29:37.960: INFO: (6) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 21.214271ms)
Feb 14 14:29:37.961: INFO: (6) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 21.844286ms)
Feb 14 14:29:37.961: INFO: (6) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 21.765402ms)
Feb 14 14:29:37.961: INFO: (6) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 22.099212ms)
Feb 14 14:29:37.961: INFO: (6) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 22.440107ms)
Feb 14 14:29:37.962: INFO: (6) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 22.479245ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 23.320961ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 23.326068ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 23.310781ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 23.775434ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 23.482165ms)
Feb 14 14:29:37.963: INFO: (6) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 23.734806ms)
Feb 14 14:29:37.972: INFO: (7) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 8.739744ms)
Feb 14 14:29:37.973: INFO: (7) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 9.976959ms)
Feb 14 14:29:37.974: INFO: (7) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 10.225782ms)
Feb 14 14:29:37.974: INFO: (7) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 10.93765ms)
Feb 14 14:29:37.974: INFO: (7) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test<... (200; 11.108692ms)
Feb 14 14:29:37.974: INFO: (7) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 11.290386ms)
Feb 14 14:29:37.975: INFO: (7) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 11.328113ms)
Feb 14 14:29:37.975: INFO: (7) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 11.637868ms)
Feb 14 14:29:37.975: INFO: (7) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 11.898312ms)
Feb 14 14:29:37.975: INFO: (7) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 12.218731ms)
Feb 14 14:29:37.976: INFO: (7) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 12.211125ms)
Feb 14 14:29:37.976: INFO: (7) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 12.723038ms)
Feb 14 14:29:37.976: INFO: (7) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 13.039124ms)
Feb 14 14:29:37.985: INFO: (8) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 8.819062ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 9.646196ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 9.513447ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 9.556019ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 9.530718ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 9.6965ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 9.568927ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 9.603989ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 9.729758ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 9.971887ms)
Feb 14 14:29:37.986: INFO: (8) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test<... (200; 5.739178ms)
Feb 14 14:29:37.995: INFO: (9) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 5.793402ms)
Feb 14 14:29:37.995: INFO: (9) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 15.399271ms)
Feb 14 14:29:38.005: INFO: (9) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 15.385278ms)
Feb 14 14:29:38.005: INFO: (9) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 15.562923ms)
Feb 14 14:29:38.005: INFO: (9) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 15.082379ms)
Feb 14 14:29:38.006: INFO: (9) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 16.614083ms)
Feb 14 14:29:38.007: INFO: (9) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 18.048998ms)
Feb 14 14:29:38.013: INFO: (10) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 5.56599ms)
Feb 14 14:29:38.014: INFO: (10) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 5.818369ms)
Feb 14 14:29:38.014: INFO: (10) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 6.780963ms)
Feb 14 14:29:38.015: INFO: (10) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 7.140648ms)
Feb 14 14:29:38.016: INFO: (10) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 8.491445ms)
Feb 14 14:29:38.017: INFO: (10) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 9.296434ms)
Feb 14 14:29:38.017: INFO: (10) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 9.118653ms)
Feb 14 14:29:38.017: INFO: (10) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 9.678789ms)
Feb 14 14:29:38.018: INFO: (10) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: ... (200; 10.626566ms)
Feb 14 14:29:38.036: INFO: (11) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 10.566811ms)
Feb 14 14:29:38.036: INFO: (11) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 10.907409ms)
Feb 14 14:29:38.036: INFO: (11) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 10.96897ms)
Feb 14 14:29:38.036: INFO: (11) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 10.564609ms)
Feb 14 14:29:38.036: INFO: (11) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 11.082961ms)
Feb 14 14:29:38.037: INFO: (11) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 11.993673ms)
Feb 14 14:29:38.038: INFO: (11) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 13.088646ms)
Feb 14 14:29:38.038: INFO: (11) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 13.422284ms)
Feb 14 14:29:38.038: INFO: (11) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 13.214345ms)
Feb 14 14:29:38.038: INFO: (11) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 13.268965ms)
Feb 14 14:29:38.039: INFO: (11) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 13.865582ms)
Feb 14 14:29:38.040: INFO: (11) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 15.325541ms)
Feb 14 14:29:38.051: INFO: (12) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 10.742042ms)
Feb 14 14:29:38.051: INFO: (12) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test<... (200; 11.203026ms)
Feb 14 14:29:38.051: INFO: (12) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 11.231205ms)
Feb 14 14:29:38.053: INFO: (12) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 12.782156ms)
Feb 14 14:29:38.053: INFO: (12) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 12.670641ms)
Feb 14 14:29:38.054: INFO: (12) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 13.082737ms)
Feb 14 14:29:38.054: INFO: (12) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 13.467279ms)
Feb 14 14:29:38.054: INFO: (12) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 13.391472ms)
Feb 14 14:29:38.054: INFO: (12) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 13.440897ms)
Feb 14 14:29:38.060: INFO: (13) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 6.119002ms)
Feb 14 14:29:38.060: INFO: (13) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 6.415294ms)
Feb 14 14:29:38.061: INFO: (13) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 7.353481ms)
Feb 14 14:29:38.063: INFO: (13) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 8.986637ms)
Feb 14 14:29:38.063: INFO: (13) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 9.453848ms)
Feb 14 14:29:38.064: INFO: (13) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 9.787255ms)
Feb 14 14:29:38.064: INFO: (13) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 9.778672ms)
Feb 14 14:29:38.064: INFO: (13) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 10.191211ms)
Feb 14 14:29:38.064: INFO: (13) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 10.438418ms)
Feb 14 14:29:38.067: INFO: (13) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 13.005566ms)
Feb 14 14:29:38.067: INFO: (13) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 12.950617ms)
Feb 14 14:29:38.067: INFO: (13) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 13.178545ms)
Feb 14 14:29:38.067: INFO: (13) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 13.274308ms)
Feb 14 14:29:38.067: INFO: (13) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 13.437745ms)
Feb 14 14:29:38.068: INFO: (13) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 13.591587ms)
Feb 14 14:29:38.076: INFO: (14) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 8.094327ms)
Feb 14 14:29:38.076: INFO: (14) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 8.120213ms)
Feb 14 14:29:38.076: INFO: (14) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 8.594658ms)
Feb 14 14:29:38.076: INFO: (14) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 8.732848ms)
Feb 14 14:29:38.076: INFO: (14) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test<... (200; 8.448166ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 9.480287ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 9.43103ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 9.906388ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 9.824998ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 9.970036ms)
Feb 14 14:29:38.094: INFO: (15) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 10.328541ms)
Feb 14 14:29:38.095: INFO: (15) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 5.279077ms)
Feb 14 14:29:38.106: INFO: (16) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 7.946418ms)
Feb 14 14:29:38.106: INFO: (16) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 7.899457ms)
Feb 14 14:29:38.107: INFO: (16) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 8.831289ms)
Feb 14 14:29:38.109: INFO: (16) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 10.785033ms)
Feb 14 14:29:38.109: INFO: (16) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test<... (200; 11.396339ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 12.957248ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 13.049269ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 13.404817ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 13.365665ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 13.279923ms)
Feb 14 14:29:38.111: INFO: (16) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 13.567705ms)
Feb 14 14:29:38.124: INFO: (17) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 12.672761ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 13.19056ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 13.370102ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 13.367022ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 13.734528ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 13.618435ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 13.840075ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 13.750957ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 13.841753ms)
Feb 14 14:29:38.125: INFO: (17) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: test (200; 15.456115ms)
Feb 14 14:29:38.146: INFO: (18) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 15.506872ms)
Feb 14 14:29:38.146: INFO: (18) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 15.352738ms)
Feb 14 14:29:38.146: INFO: (18) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 15.49812ms)
Feb 14 14:29:38.147: INFO: (18) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 15.700095ms)
Feb 14 14:29:38.147: INFO: (18) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 15.751614ms)
Feb 14 14:29:38.147: INFO: (18) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 15.93431ms)
Feb 14 14:29:38.147: INFO: (18) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 16.50668ms)
Feb 14 14:29:38.148: INFO: (18) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 16.664701ms)
Feb 14 14:29:38.148: INFO: (18) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: ... (200; 16.832431ms)
Feb 14 14:29:38.149: INFO: (18) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 17.629148ms)
Feb 14 14:29:38.150: INFO: (18) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 18.784837ms)
Feb 14 14:29:38.170: INFO: (19) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:1080/proxy/: test<... (200; 20.184747ms)
Feb 14 14:29:38.170: INFO: (19) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname2/proxy/: bar (200; 20.129331ms)
Feb 14 14:29:38.171: INFO: (19) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 20.442523ms)
Feb 14 14:29:38.171: INFO: (19) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname1/proxy/: foo (200; 20.843555ms)
Feb 14 14:29:38.171: INFO: (19) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:1080/proxy/: ... (200; 20.940883ms)
Feb 14 14:29:38.171: INFO: (19) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 21.172268ms)
Feb 14 14:29:38.171: INFO: (19) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:160/proxy/: foo (200; 21.230392ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/pods/proxy-service-4d2c8-wm47b/proxy/: test (200; 23.686954ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname1/proxy/: tls baz (200; 23.567504ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:462/proxy/: tls qux (200; 23.615088ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/services/http:proxy-service-4d2c8:portname1/proxy/: foo (200; 23.533619ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:460/proxy/: tls baz (200; 23.677595ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/services/https:proxy-service-4d2c8:tlsportname2/proxy/: tls qux (200; 23.762252ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/pods/http:proxy-service-4d2c8-wm47b:162/proxy/: bar (200; 24.026813ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/services/proxy-service-4d2c8:portname2/proxy/: bar (200; 23.970844ms)
Feb 14 14:29:38.174: INFO: (19) /api/v1/namespaces/proxy-9635/pods/https:proxy-service-4d2c8-wm47b:443/proxy/: ...
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 14 14:29:52.805: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332088,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 14:29:52.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332089,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 14 14:29:52.806: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332090,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 14 14:30:02.932: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332106,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 14:30:02.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332107,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 14 14:30:02.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6602,SelfLink:/api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-label-changed,UID:976900bd-d316-4b3a-bc24-0b00730569fe,ResourceVersion:24332108,Generation:0,CreationTimestamp:2020-02-14 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:30:02.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6602" for this suite.
Feb 14 14:30:08.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:30:09.043: INFO: namespace watch-6602 deletion completed in 6.103588878s

• [SLOW TEST:16.333 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
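The watch test above drives events by flipping a label on a ConfigMap: while the label matches the watch's selector the client sees ADDED/MODIFIED events, and changing the label away produces a DELETED notification. A minimal sketch of such a labeled ConfigMap, using the names that appear in this run (the namespace and surrounding watch setup are omitted):

```yaml
# ConfigMap matched by a label-selector watch. Changing the
# watch-this-configmap label to another value makes the object stop
# matching the selector, which the watcher observes as DELETED;
# restoring it is observed as ADDED.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```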
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:30:09.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 14 14:30:19.213: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 14 14:30:39.359: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:30:39.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6408" for this suite.
Feb 14 14:30:45.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:30:45.554: INFO: namespace pods-6408 deletion completed in 6.165644089s

• [SLOW TEST:36.510 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
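The Delete Grace Period test submits a pod, deletes it gracefully, and then polls until the kubelet has observed the termination notice and the pod is gone. A sketch of a pod with an explicit grace period (the name, image, and value are illustrative, not the exact objects from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod               # illustrative name
spec:
  terminationGracePeriodSeconds: 30  # after SIGTERM, kubelet waits up to 30s before SIGKILL
  containers:
  - name: app
    image: nginx
```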
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:30:45.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-817/configmap-test-c2d2de60-bd51-4f37-8adb-42946d9d93e6
STEP: Creating a pod to test consume configMaps
Feb 14 14:30:45.689: INFO: Waiting up to 5m0s for pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4" in namespace "configmap-817" to be "success or failure"
Feb 14 14:30:45.722: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.587121ms
Feb 14 14:30:47.735: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045475925s
Feb 14 14:30:49.744: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054187918s
Feb 14 14:30:51.754: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063917919s
Feb 14 14:30:53.797: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107209007s
Feb 14 14:30:55.812: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122000428s
STEP: Saw pod success
Feb 14 14:30:55.812: INFO: Pod "pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4" satisfied condition "success or failure"
Feb 14 14:30:55.816: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4 container env-test: 
STEP: delete the pod
Feb 14 14:30:55.875: INFO: Waiting for pod pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4 to disappear
Feb 14 14:30:55.907: INFO: Pod pod-configmaps-d986c716-5043-4608-8406-25d15954d9a4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:30:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-817" for this suite.
Feb 14 14:31:03.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:31:03.530: INFO: namespace configmap-817 deletion completed in 7.563614827s

• [SLOW TEST:17.976 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
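The ConfigMap environment test creates a ConfigMap and a pod whose `env-test` container reads a key from it as an environment variable, then checks the container logs. A hedged sketch of that wiring (key and variable names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test         # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # prints the injected variable to the logs
    env:
    - name: CONFIG_DATA_1          # populated from the ConfigMap key below
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```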
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:31:03.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9667.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9667.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9667.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9667.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9667.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 14:31:17.769: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.790: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.802: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9667.svc.cluster.local from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.813: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.824: INFO: Unable to read jessie_udp@PodARecord from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.836: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8: the server could not find the requested resource (get pods dns-test-f177696d-316e-4417-9aa7-088a94d60bf8)
Feb 14 14:31:17.836: INFO: Lookups using dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9667.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 14 14:31:22.913: INFO: DNS probes using dns-9667/dns-test-f177696d-316e-4417-9aa7-088a94d60bf8 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:31:23.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9667" for this suite.
Feb 14 14:31:31.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:31:31.193: INFO: namespace dns-9667 deletion completed in 8.161162982s

• [SLOW TEST:27.662 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
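The DNS probe scripts shown above resolve `dns-querier-1.dns-test-service.dns-9667.svc.cluster.local` from /etc/hosts. A pod gets such an entry when its `hostname` and `subdomain` match a headless service; a sketch using the names from this run (namespace and probe containers omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None            # headless service backing the pod's FQDN
  selector:
    name: dns-querier
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-1
  labels:
    name: dns-querier
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service  # yields dns-querier-1.dns-test-service.<ns>.svc.cluster.local
  containers:
  - name: probe              # illustrative stand-in for the test's prober containers
    image: busybox
    command: ["sleep", "600"]
```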
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:31:31.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 14 14:31:31.297: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:31:45.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5234" for this suite.
Feb 14 14:31:51.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:31:52.143: INFO: namespace init-container-5234 deletion completed in 6.210817687s

• [SLOW TEST:20.950 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
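The init-container test relies on the rule that with `restartPolicy: Never`, a failing init container marks the whole pod Failed and the app containers never start. A minimal sketch of such a pod (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail      # illustrative name
spec:
  restartPolicy: Never     # init failure fails the pod; no retries
  initContainers:
  - name: init1
    image: busybox
    command: ["false"]     # always exits non-zero
  containers:
  - name: app              # never started, because init1 fails first
    image: busybox
    command: ["true"]
```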
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:31:52.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 14:31:52.377: INFO: Waiting up to 5m0s for pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e" in namespace "emptydir-5636" to be "success or failure"
Feb 14 14:31:52.482: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Pending", Reason="", readiness=false. Elapsed: 104.323651ms
Feb 14 14:31:54.505: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126941349s
Feb 14 14:31:56.517: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138974381s
Feb 14 14:31:58.533: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155079473s
Feb 14 14:32:00.549: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Running", Reason="", readiness=true. Elapsed: 8.171240923s
Feb 14 14:32:02.559: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181690507s
STEP: Saw pod success
Feb 14 14:32:02.560: INFO: Pod "pod-80cbbb0f-4025-4d45-9205-de0dca61e68e" satisfied condition "success or failure"
Feb 14 14:32:02.565: INFO: Trying to get logs from node iruya-node pod pod-80cbbb0f-4025-4d45-9205-de0dca61e68e container test-container: 
STEP: delete the pod
Feb 14 14:32:02.751: INFO: Waiting for pod pod-80cbbb0f-4025-4d45-9205-de0dca61e68e to disappear
Feb 14 14:32:02.796: INFO: Pod pod-80cbbb0f-4025-4d45-9205-de0dca61e68e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:32:02.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5636" for this suite.
Feb 14 14:32:08.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:32:08.975: INFO: namespace emptydir-5636 deletion completed in 6.170812596s

• [SLOW TEST:16.831 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
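The (non-root,0666,tmpfs) EmptyDir case runs a non-root container against a memory-backed emptyDir and verifies file modes on it. A sketch of the pod shape involved (name, UID, and command are illustrative; the actual test image writes and stats a file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs     # illustrative name
spec:
  securityContext:
    runAsUser: 1001            # non-root, per the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```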
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:32:08.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:32:33.193: INFO: Container started at 2020-02-14 14:32:16 +0000 UTC, pod became ready at 2020-02-14 14:32:32 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:32:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1054" for this suite.
Feb 14 14:32:55.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:32:55.364: INFO: namespace container-probe-1054 deletion completed in 22.163900037s

• [SLOW TEST:46.388 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
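The readiness-probe test asserts that the pod does not report Ready before the probe's initial delay has elapsed (here the container started at 14:32:16 and became ready at 14:32:32). A sketch of a pod with such a delayed readiness probe (name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-pod              # illustrative name
spec:
  containers:
  - name: test-webserver
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15  # pod must not be Ready before this delay
      periodSeconds: 5
```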
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:32:55.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3469
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3469
STEP: Creating statefulset with conflicting port in namespace statefulset-3469
STEP: Waiting until pod test-pod will start running in namespace statefulset-3469
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3469
Feb 14 14:33:05.742: INFO: Observed stateful pod in namespace: statefulset-3469, name: ss-0, uid: c729bffe-5576-41c9-9310-c397d814bba6, status phase: Pending. Waiting for statefulset controller to delete.
Feb 14 14:33:06.502: INFO: Observed stateful pod in namespace: statefulset-3469, name: ss-0, uid: c729bffe-5576-41c9-9310-c397d814bba6, status phase: Failed. Waiting for statefulset controller to delete.
Feb 14 14:33:06.562: INFO: Observed stateful pod in namespace: statefulset-3469, name: ss-0, uid: c729bffe-5576-41c9-9310-c397d814bba6, status phase: Failed. Waiting for statefulset controller to delete.
Feb 14 14:33:06.571: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3469
STEP: Removing pod with conflicting port in namespace statefulset-3469
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3469 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 14 14:33:16.789: INFO: Deleting all statefulset in ns statefulset-3469
Feb 14 14:33:16.794: INFO: Scaling statefulset ss to 0
Feb 14 14:33:26.828: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:33:26.837: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:33:26.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3469" for this suite.
Feb 14 14:33:32.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:33:33.268: INFO: namespace statefulset-3469 deletion completed in 6.372325531s

• [SLOW TEST:37.904 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
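The StatefulSet test forces the failure by first scheduling a plain pod that claims a host port on a chosen node, then creating a StatefulSet whose pod requests the same host port on that node; `ss-0` goes Failed and the controller recreates it once the conflicting pod is removed. A sketch of the conflicting-pod half of that setup (the port is illustrative; the node name is from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node        # pin to the same node the stateful pod targets
  containers:
  - name: conflict
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 21017         # illustrative; ss-0 requests the same hostPort and fails
```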
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:33:33.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 14 14:33:33.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9897'
Feb 14 14:33:33.996: INFO: stderr: ""
Feb 14 14:33:33.997: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 14:33:33.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9897'
Feb 14 14:33:34.344: INFO: stderr: ""
Feb 14 14:33:34.344: INFO: stdout: "update-demo-nautilus-kppll update-demo-nautilus-tqkh5 "
Feb 14 14:33:34.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kppll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:34.609: INFO: stderr: ""
Feb 14 14:33:34.609: INFO: stdout: ""
Feb 14 14:33:34.609: INFO: update-demo-nautilus-kppll is created but not running
Feb 14 14:33:39.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9897'
Feb 14 14:33:41.094: INFO: stderr: ""
Feb 14 14:33:41.094: INFO: stdout: "update-demo-nautilus-kppll update-demo-nautilus-tqkh5 "
Feb 14 14:33:41.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kppll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:41.452: INFO: stderr: ""
Feb 14 14:33:41.452: INFO: stdout: ""
Feb 14 14:33:41.452: INFO: update-demo-nautilus-kppll is created but not running
Feb 14 14:33:46.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9897'
Feb 14 14:33:46.709: INFO: stderr: ""
Feb 14 14:33:46.709: INFO: stdout: "update-demo-nautilus-kppll update-demo-nautilus-tqkh5 "
Feb 14 14:33:46.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kppll -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:46.852: INFO: stderr: ""
Feb 14 14:33:46.852: INFO: stdout: "true"
Feb 14 14:33:46.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kppll -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:46.991: INFO: stderr: ""
Feb 14 14:33:46.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 14:33:46.992: INFO: validating pod update-demo-nautilus-kppll
Feb 14 14:33:47.009: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 14:33:47.009: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 14:33:47.010: INFO: update-demo-nautilus-kppll is verified up and running
Feb 14 14:33:47.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqkh5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:47.191: INFO: stderr: ""
Feb 14 14:33:47.191: INFO: stdout: "true"
Feb 14 14:33:47.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tqkh5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:33:47.356: INFO: stderr: ""
Feb 14 14:33:47.356: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 14:33:47.356: INFO: validating pod update-demo-nautilus-tqkh5
Feb 14 14:33:47.379: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 14:33:47.379: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 14:33:47.379: INFO: update-demo-nautilus-tqkh5 is verified up and running
STEP: rolling-update to new replication controller
Feb 14 14:33:47.382: INFO: scanned /root for discovery docs: 
Feb 14 14:33:47.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9897'
Feb 14 14:34:19.643: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 14:34:19.643: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 14:34:19.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9897'
Feb 14 14:34:19.868: INFO: stderr: ""
Feb 14 14:34:19.868: INFO: stdout: "update-demo-kitten-mm86m update-demo-kitten-ttpxp "
Feb 14 14:34:19.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mm86m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:34:19.975: INFO: stderr: ""
Feb 14 14:34:19.975: INFO: stdout: "true"
Feb 14 14:34:19.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mm86m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:34:20.149: INFO: stderr: ""
Feb 14 14:34:20.149: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 14:34:20.150: INFO: validating pod update-demo-kitten-mm86m
Feb 14 14:34:20.187: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 14:34:20.188: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 14 14:34:20.188: INFO: update-demo-kitten-mm86m is verified up and running
Feb 14 14:34:20.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ttpxp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:34:20.293: INFO: stderr: ""
Feb 14 14:34:20.294: INFO: stdout: "true"
Feb 14 14:34:20.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ttpxp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9897'
Feb 14 14:34:20.450: INFO: stderr: ""
Feb 14 14:34:20.450: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 14:34:20.450: INFO: validating pod update-demo-kitten-ttpxp
Feb 14 14:34:20.485: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 14:34:20.485: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 14 14:34:20.485: INFO: update-demo-kitten-ttpxp is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:34:20.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9897" for this suite.
Feb 14 14:34:42.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:34:42.633: INFO: namespace kubectl-9897 deletion completed in 22.140753931s

• [SLOW TEST:69.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
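The per-pod readiness probe the Update Demo test loops on above uses a Go template that prints `true` only when the named container reports a `running` state. The template below is copied verbatim from the log (with the container name `update-demo`); the usage comment is a sketch that assumes access to the same cluster and namespace.

```shell
# Go template from the log's readiness checks: emits "true" iff the
# "update-demo" container has a populated "running" state entry.
RUNNING_TMPL='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Usage against a live cluster (sketch only; pod/namespace from this run):
#   kubectl get pods update-demo-nautilus-kppll -o template \
#     --template="$RUNNING_TMPL" --namespace=kubectl-9897
printf '%s\n' "$RUNNING_TMPL"
```

An empty stdout (as in the `is created but not running` lines above) means the container exists but has no `running` state yet, so the test sleeps and retries.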
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:34:42.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:34:51.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1767" for this suite.
Feb 14 14:35:14.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:35:14.673: INFO: namespace replication-controller-1767 deletion completed in 22.847312615s

• [SLOW TEST:32.040 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:35:14.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 14:35:14.902: INFO: Number of nodes with available pods: 0
Feb 14 14:35:14.902: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:15.922: INFO: Number of nodes with available pods: 0
Feb 14 14:35:15.922: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:16.919: INFO: Number of nodes with available pods: 0
Feb 14 14:35:16.919: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:17.927: INFO: Number of nodes with available pods: 0
Feb 14 14:35:17.927: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:18.921: INFO: Number of nodes with available pods: 0
Feb 14 14:35:18.921: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:19.914: INFO: Number of nodes with available pods: 0
Feb 14 14:35:19.914: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:21.859: INFO: Number of nodes with available pods: 0
Feb 14 14:35:21.859: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:22.801: INFO: Number of nodes with available pods: 0
Feb 14 14:35:22.801: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:22.918: INFO: Number of nodes with available pods: 0
Feb 14 14:35:22.918: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:24.779: INFO: Number of nodes with available pods: 0
Feb 14 14:35:24.779: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:24.945: INFO: Number of nodes with available pods: 0
Feb 14 14:35:24.945: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:25.915: INFO: Number of nodes with available pods: 0
Feb 14 14:35:25.916: INFO: Node iruya-node is running more than one daemon pod
Feb 14 14:35:26.934: INFO: Number of nodes with available pods: 2
Feb 14 14:35:26.934: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 14 14:35:27.068: INFO: Number of nodes with available pods: 1
Feb 14 14:35:27.068: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:28.489: INFO: Number of nodes with available pods: 1
Feb 14 14:35:28.490: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:29.654: INFO: Number of nodes with available pods: 1
Feb 14 14:35:29.654: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:30.081: INFO: Number of nodes with available pods: 1
Feb 14 14:35:30.081: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:31.095: INFO: Number of nodes with available pods: 1
Feb 14 14:35:31.096: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:32.085: INFO: Number of nodes with available pods: 1
Feb 14 14:35:32.085: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:33.161: INFO: Number of nodes with available pods: 1
Feb 14 14:35:33.161: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:34.903: INFO: Number of nodes with available pods: 1
Feb 14 14:35:34.903: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:35.457: INFO: Number of nodes with available pods: 1
Feb 14 14:35:35.457: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:36.124: INFO: Number of nodes with available pods: 1
Feb 14 14:35:36.124: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:37.091: INFO: Number of nodes with available pods: 1
Feb 14 14:35:37.091: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 14 14:35:38.083: INFO: Number of nodes with available pods: 2
Feb 14 14:35:38.084: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4463, will wait for the garbage collector to delete the pods
Feb 14 14:35:38.166: INFO: Deleting DaemonSet.extensions daemon-set took: 19.124762ms
Feb 14 14:35:38.467: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.070285ms
Feb 14 14:35:47.876: INFO: Number of nodes with available pods: 0
Feb 14 14:35:47.876: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 14:35:47.880: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4463/daemonsets","resourceVersion":"24333054"},"items":null}

Feb 14 14:35:47.884: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4463/pods","resourceVersion":"24333054"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:35:47.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4463" for this suite.
Feb 14 14:35:53.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:35:54.039: INFO: namespace daemonsets-4463 deletion completed in 6.139149111s

• [SLOW TEST:39.365 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
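The DaemonSet test above polls roughly once per second until the number of available pods matches the node count. A minimal poll loop in that spirit can be written as a small shell helper; the function name and interface are ours, not the e2e framework's.

```shell
# poll_until <retries> <sleep_secs> <cmd...>
# Runs <cmd...> until it exits 0 or <retries> attempts are exhausted.
poll_until() {
  tries=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then
      return 0            # probe succeeded
    fi
    i=$((i + 1))
    sleep "$delay"        # back off before the next attempt
  done
  return 1                # deadline exhausted
}
```

In the log's terms, the probe would be a `kubectl get pods` invocation whose output is compared against the expected ready count, e.g. `poll_until 300 1 daemon_pods_ready daemonsets-4463`.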
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:35:54.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:35:54.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d" in namespace "downward-api-3867" to be "success or failure"
Feb 14 14:35:54.168: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445412ms
Feb 14 14:35:56.294: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132750228s
Feb 14 14:35:58.313: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15155346s
Feb 14 14:36:00.335: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173878163s
Feb 14 14:36:02.344: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182904139s
Feb 14 14:36:04.355: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.193455232s
STEP: Saw pod success
Feb 14 14:36:04.355: INFO: Pod "downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d" satisfied condition "success or failure"
Feb 14 14:36:04.360: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d container client-container: 
STEP: delete the pod
Feb 14 14:36:04.453: INFO: Waiting for pod downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d to disappear
Feb 14 14:36:04.506: INFO: Pod downwardapi-volume-cc5d14d6-4d85-4bd1-9b31-4e5a5fd12c6d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:36:04.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3867" for this suite.
Feb 14 14:36:10.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:36:10.695: INFO: namespace downward-api-3867 deletion completed in 6.179158712s

• [SLOW TEST:16.655 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
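The Downward API test above creates a pod whose volume has `defaultMode` set and verifies the file permissions inside the container. The manifest below is a hedged sketch of that kind of pod, not the test's actual fixture; all names (`downwardapi-volume-example`, `client-container`, the mount path) are invented for illustration.

```shell
# Sketch of a pod manifest exercising downwardAPI defaultMode.
# Stored in a variable so it can be piped to `kubectl apply -f -`.
MANIFEST=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
)
printf '%s\n' "$MANIFEST"
```

The container's `stat` output would show the mode applied to the projected file, which is what the test asserts before the pod reaches `Succeeded`.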
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:36:10.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 14:36:10.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6885'
Feb 14 14:36:11.005: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 14:36:11.005: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 14 14:36:11.055: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 14 14:36:11.082: INFO: scanned /root for discovery docs: 
Feb 14 14:36:11.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6885'
Feb 14 14:36:33.690: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 14:36:33.690: INFO: stdout: "Created e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84\nScaling up e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 14 14:36:33.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:33.862: INFO: stderr: ""
Feb 14 14:36:33.863: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:36:38.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:39.007: INFO: stderr: ""
Feb 14 14:36:39.008: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:36:44.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:44.131: INFO: stderr: ""
Feb 14 14:36:44.132: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:36:49.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:49.280: INFO: stderr: ""
Feb 14 14:36:49.280: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:36:54.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:54.423: INFO: stderr: ""
Feb 14 14:36:54.424: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:36:59.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:36:59.600: INFO: stderr: ""
Feb 14 14:36:59.600: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:04.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:04.748: INFO: stderr: ""
Feb 14 14:37:04.748: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:09.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:10.016: INFO: stderr: ""
Feb 14 14:37:10.017: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:15.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:15.199: INFO: stderr: ""
Feb 14 14:37:15.199: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:20.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:20.394: INFO: stderr: ""
Feb 14 14:37:20.394: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:25.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:25.581: INFO: stderr: ""
Feb 14 14:37:25.582: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:30.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:30.733: INFO: stderr: ""
Feb 14 14:37:30.734: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:35.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:35.891: INFO: stderr: ""
Feb 14 14:37:35.891: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:40.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:41.079: INFO: stderr: ""
Feb 14 14:37:41.079: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:46.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:46.245: INFO: stderr: ""
Feb 14 14:37:46.246: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r e2e-test-nginx-rc-rfjsw "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 14 14:37:51.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:51.376: INFO: stderr: ""
Feb 14 14:37:51.376: INFO: stdout: "e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r "
Feb 14 14:37:51.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6885'
Feb 14 14:37:51.516: INFO: stderr: ""
Feb 14 14:37:51.517: INFO: stdout: "true"
Feb 14 14:37:51.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6885'
Feb 14 14:37:51.622: INFO: stderr: ""
Feb 14 14:37:51.622: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 14 14:37:51.622: INFO: e2e-test-nginx-rc-7c5d6f37ea1c7a65c9cbc87ec6a9cd84-cfj2r is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 14 14:37:51.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6885'
Feb 14 14:37:51.753: INFO: stderr: ""
Feb 14 14:37:51.754: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:37:51.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6885" for this suite.
Feb 14 14:38:15.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:38:15.937: INFO: namespace kubectl-6885 deletion completed in 24.177698248s

• [SLOW TEST:125.242 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
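The rolling-update check above polls `kubectl get pods -l run=e2e-test-nginx-rc` every few seconds until the replica count matches the expected value (here, until the old pod is torn down and only one remains). A minimal sketch of that poll-until-match pattern, with a hypothetical `list_pods` callable standing in for the kubectl query (this is not the e2e framework's actual code):

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300, interval=5):
    """Poll list_pods() until it returns exactly `expected` pod names.

    list_pods: zero-arg callable returning a list of pod names
    (stands in for `kubectl get pods -l ... -o template`).
    Returns the final pod list, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    pods = list_pods()
    while time.monotonic() < deadline:
        if len(pods) == expected:
            return pods
        time.sleep(interval)
        pods = list_pods()
    raise TimeoutError(f"expected {expected} pods, last saw {len(pods)}")
```

In the log, the loop observes `actual=2` five times before the old replication-controller pod disappears and the condition is met.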
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:38:15.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0214 14:38:18.652458       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 14:38:18.652: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:38:18.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8938" for this suite.
Feb 14 14:38:27.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:38:27.519: INFO: namespace gc-8938 deletion completed in 8.746911337s

• [SLOW TEST:11.581 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
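The garbage-collector test above depends on owner references: deleting the Deployment without orphaning leaves its ReplicaSet (and, transitively, the ReplicaSet's Pods) with no live owner, so the collector removes them, which is why the log briefly reports "expected 0 rs, got 1 rs" before convergence. A toy fixed-point model of that dependency walk (heavily simplified; the real collector maintains an event-driven object graph):

```python
def collectable(objects, deleted_uids):
    """Return names of objects whose every owner UID has been deleted.

    objects: list of dicts with 'name', 'uid', and 'owners' (owner UIDs).
    deleted_uids: set of UIDs already deleted. Iterates to a fixed point
    so transitively orphaned objects (Pods owned by a collected
    ReplicaSet) are also swept, mimicking cascading deletion.
    """
    gone = set(deleted_uids)
    doomed = []
    changed = True
    while changed:
        changed = False
        for obj in objects:
            if obj["uid"] in gone or not obj["owners"]:
                continue  # already swept, or a root object with no owner
            if all(owner in gone for owner in obj["owners"]):
                gone.add(obj["uid"])
                doomed.append(obj["name"])
                changed = True
    return doomed
```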
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:38:27.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:38:27.690: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 14 14:38:27.760: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 14 14:38:32.772: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 14:38:36.787: INFO: Creating deployment "test-rolling-update-deployment"
Feb 14 14:38:36.798: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 14 14:38:36.863: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 14 14:38:38.898: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 14 14:38:38.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287917, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:38:40.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287917, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:38:42.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287917, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:38:44.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287917, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717287916, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:38:46.910: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 14 14:38:46.924: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8834,SelfLink:/apis/apps/v1/namespaces/deployment-8834/deployments/test-rolling-update-deployment,UID:282a3228-1d20-4c5f-9f1f-0016e23ab87f,ResourceVersion:24333503,Generation:1,CreationTimestamp:2020-02-14 14:38:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-14 14:38:36 +0000 UTC 2020-02-14 14:38:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-14 14:38:45 +0000 UTC 2020-02-14 14:38:36 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 14:38:46.928: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8834,SelfLink:/apis/apps/v1/namespaces/deployment-8834/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:c5613cde-2e0d-41f5-ab73-f6eb5bfa7850,ResourceVersion:24333493,Generation:1,CreationTimestamp:2020-02-14 14:38:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 282a3228-1d20-4c5f-9f1f-0016e23ab87f 0xc0029cf8b7 0xc0029cf8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 14 14:38:46.928: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 14 14:38:46.928: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8834,SelfLink:/apis/apps/v1/namespaces/deployment-8834/replicasets/test-rolling-update-controller,UID:f4dd3f2c-c141-4d7b-93e1-88abcbb23433,ResourceVersion:24333502,Generation:2,CreationTimestamp:2020-02-14 14:38:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 282a3228-1d20-4c5f-9f1f-0016e23ab87f 0xc0029cf7cf 0xc0029cf7e0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 14:38:46.932: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-jnbhv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-jnbhv,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8834,SelfLink:/api/v1/namespaces/deployment-8834/pods/test-rolling-update-deployment-79f6b9d75c-jnbhv,UID:75c1c027-96b2-4d80-a674-33afc68638e9,ResourceVersion:24333492,Generation:0,CreationTimestamp:2020-02-14 14:38:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c c5613cde-2e0d-41f5-ab73-f6eb5bfa7850 0xc002b9c1c7 0xc002b9c1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lrpjx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lrpjx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-lrpjx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b9c250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b9c270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:38:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:38:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:38:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:38:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-14 14:38:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-14 14:38:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://a150c224b6a870a965147cc081b836f375ba4a05a0f9d1ef53c5c184bfe313d3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:38:46.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8834" for this suite.
Feb 14 14:38:52.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:38:53.172: INFO: namespace deployment-8834 deletion completed in 6.234669601s

• [SLOW TEST:25.653 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:38:53.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 14 14:39:03.942: INFO: Successfully updated pod "annotationupdate388fcb07-bb07-457f-84f1-a03c941986b9"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:39:06.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2005" for this suite.
Feb 14 14:39:28.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:39:28.206: INFO: namespace projected-2005 deletion completed in 22.144991681s

• [SLOW TEST:35.033 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:39:28.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-9275/secret-test-25aff480-5aa0-464a-a1ac-d1348557c908
STEP: Creating a pod to test consume secrets
Feb 14 14:39:28.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a" in namespace "secrets-9275" to be "success or failure"
Feb 14 14:39:28.345: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.250409ms
Feb 14 14:39:30.356: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021687222s
Feb 14 14:39:32.373: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038898831s
Feb 14 14:39:34.387: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052967239s
Feb 14 14:39:36.433: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099520347s
Feb 14 14:39:38.446: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112544877s
STEP: Saw pod success
Feb 14 14:39:38.447: INFO: Pod "pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a" satisfied condition "success or failure"
Feb 14 14:39:38.451: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a container env-test: 
STEP: delete the pod
Feb 14 14:39:38.612: INFO: Waiting for pod pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a to disappear
Feb 14 14:39:38.619: INFO: Pod pod-configmaps-d3d0230c-80b3-49f5-98b2-881b48e4331a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:39:38.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9275" for this suite.
Feb 14 14:39:44.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:39:44.745: INFO: namespace secrets-9275 deletion completed in 6.12005777s

• [SLOW TEST:16.539 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
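The Secrets test above injects a secret key into a container's environment. In the API a Secret's `data` values are stored base64-encoded, while the container observes the decoded bytes; a sketch of that mapping (the key name below is illustrative, not taken from the test):

```python
import base64

def secret_to_env(secret_data):
    """Decode a Secret's base64-encoded `data` map into the plain-text
    environment values a container consuming it would observe."""
    return {key: base64.b64decode(val).decode() for key, val in secret_data.items()}
```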
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:39:44.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6245
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 14 14:39:44.871: INFO: Found 0 stateful pods, waiting for 3
Feb 14 14:39:54.899: INFO: Found 2 stateful pods, waiting for 3
Feb 14 14:40:04.883: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:40:04.883: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:40:04.883: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 14:40:14.895: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:40:14.896: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:40:14.896: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 14:40:14.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6245 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:40:17.700: INFO: stderr: "I0214 14:40:17.482323    3525 log.go:172] (0xc000cde420) (0xc0005c6960) Create stream\nI0214 14:40:17.482407    3525 log.go:172] (0xc000cde420) (0xc0005c6960) Stream added, broadcasting: 1\nI0214 14:40:17.488074    3525 log.go:172] (0xc000cde420) Reply frame received for 1\nI0214 14:40:17.488121    3525 log.go:172] (0xc000cde420) (0xc0009fa000) Create stream\nI0214 14:40:17.488132    3525 log.go:172] (0xc000cde420) (0xc0009fa000) Stream added, broadcasting: 3\nI0214 14:40:17.489647    3525 log.go:172] (0xc000cde420) Reply frame received for 3\nI0214 14:40:17.489697    3525 log.go:172] (0xc000cde420) (0xc000a34000) Create stream\nI0214 14:40:17.490602    3525 log.go:172] (0xc000cde420) (0xc000a34000) Stream added, broadcasting: 5\nI0214 14:40:17.496849    3525 log.go:172] (0xc000cde420) Reply frame received for 5\nI0214 14:40:17.593471    3525 log.go:172] (0xc000cde420) Data frame received for 5\nI0214 14:40:17.593501    3525 log.go:172] (0xc000a34000) (5) Data frame handling\nI0214 14:40:17.593532    3525 log.go:172] (0xc000a34000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:40:17.613973    3525 log.go:172] (0xc000cde420) Data frame received for 3\nI0214 14:40:17.614008    3525 log.go:172] (0xc0009fa000) (3) Data frame handling\nI0214 14:40:17.614025    3525 log.go:172] (0xc0009fa000) (3) Data frame sent\nI0214 14:40:17.688340    3525 log.go:172] (0xc000cde420) (0xc0009fa000) Stream removed, broadcasting: 3\nI0214 14:40:17.688473    3525 log.go:172] (0xc000cde420) Data frame received for 1\nI0214 14:40:17.688517    3525 log.go:172] (0xc0005c6960) (1) Data frame handling\nI0214 14:40:17.688534    3525 log.go:172] (0xc0005c6960) (1) Data frame sent\nI0214 14:40:17.688575    3525 log.go:172] (0xc000cde420) (0xc000a34000) Stream removed, broadcasting: 5\nI0214 14:40:17.688684    3525 log.go:172] (0xc000cde420) (0xc0005c6960) Stream removed, broadcasting: 1\nI0214 14:40:17.688707    3525 log.go:172] (0xc000cde420) Go away received\nI0214 14:40:17.689593    3525 log.go:172] (0xc000cde420) (0xc0005c6960) Stream removed, broadcasting: 1\nI0214 14:40:17.689611    3525 log.go:172] (0xc000cde420) (0xc0009fa000) Stream removed, broadcasting: 3\nI0214 14:40:17.689618    3525 log.go:172] (0xc000cde420) (0xc000a34000) Stream removed, broadcasting: 5\n"
Feb 14 14:40:17.700: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:40:17.700: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 14 14:40:17.829: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 14 14:40:28.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6245 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:40:28.605: INFO: stderr: "I0214 14:40:28.379702    3560 log.go:172] (0xc000906420) (0xc00081c6e0) Create stream\nI0214 14:40:28.379862    3560 log.go:172] (0xc000906420) (0xc00081c6e0) Stream added, broadcasting: 1\nI0214 14:40:28.382464    3560 log.go:172] (0xc000906420) Reply frame received for 1\nI0214 14:40:28.382510    3560 log.go:172] (0xc000906420) (0xc0000d6280) Create stream\nI0214 14:40:28.382528    3560 log.go:172] (0xc000906420) (0xc0000d6280) Stream added, broadcasting: 3\nI0214 14:40:28.383789    3560 log.go:172] (0xc000906420) Reply frame received for 3\nI0214 14:40:28.383829    3560 log.go:172] (0xc000906420) (0xc000808000) Create stream\nI0214 14:40:28.383842    3560 log.go:172] (0xc000906420) (0xc000808000) Stream added, broadcasting: 5\nI0214 14:40:28.385395    3560 log.go:172] (0xc000906420) Reply frame received for 5\nI0214 14:40:28.473206    3560 log.go:172] (0xc000906420) Data frame received for 3\nI0214 14:40:28.473302    3560 log.go:172] (0xc0000d6280) (3) Data frame handling\nI0214 14:40:28.473318    3560 log.go:172] (0xc0000d6280) (3) Data frame sent\nI0214 14:40:28.473388    3560 log.go:172] (0xc000906420) Data frame received for 5\nI0214 14:40:28.473413    3560 log.go:172] (0xc000808000) (5) Data frame handling\nI0214 14:40:28.473448    3560 log.go:172] (0xc000808000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:40:28.594350    3560 log.go:172] (0xc000906420) Data frame received for 1\nI0214 14:40:28.594593    3560 log.go:172] (0xc000906420) (0xc000808000) Stream removed, broadcasting: 5\nI0214 14:40:28.594703    3560 log.go:172] (0xc00081c6e0) (1) Data frame handling\nI0214 14:40:28.594731    3560 log.go:172] (0xc00081c6e0) (1) Data frame sent\nI0214 14:40:28.594860    3560 log.go:172] (0xc000906420) (0xc0000d6280) Stream removed, broadcasting: 3\nI0214 14:40:28.594919    3560 log.go:172] (0xc000906420) (0xc00081c6e0) Stream removed, broadcasting: 1\nI0214 14:40:28.594945    3560 log.go:172] (0xc000906420) Go away received\nI0214 14:40:28.596133    3560 log.go:172] (0xc000906420) (0xc00081c6e0) Stream removed, broadcasting: 1\nI0214 14:40:28.596152    3560 log.go:172] (0xc000906420) (0xc0000d6280) Stream removed, broadcasting: 3\nI0214 14:40:28.596159    3560 log.go:172] (0xc000906420) (0xc000808000) Stream removed, broadcasting: 5\n"
Feb 14 14:40:28.605: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:40:28.605: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:40:38.649: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:40:38.650: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:40:38.650: INFO: Waiting for Pod statefulset-6245/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:40:48.676: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:40:48.676: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:40:48.676: INFO: Waiting for Pod statefulset-6245/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:40:59.591: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:40:59.591: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:41:08.661: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:41:08.661: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:41:18.671: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:41:18.671: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 14:41:28.664: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 14 14:41:38.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6245 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 14:41:39.160: INFO: stderr: "I0214 14:41:38.842943    3580 log.go:172] (0xc000962370) (0xc0009046e0) Create stream\nI0214 14:41:38.843146    3580 log.go:172] (0xc000962370) (0xc0009046e0) Stream added, broadcasting: 1\nI0214 14:41:38.847610    3580 log.go:172] (0xc000962370) Reply frame received for 1\nI0214 14:41:38.847657    3580 log.go:172] (0xc000962370) (0xc0006221e0) Create stream\nI0214 14:41:38.847685    3580 log.go:172] (0xc000962370) (0xc0006221e0) Stream added, broadcasting: 3\nI0214 14:41:38.849900    3580 log.go:172] (0xc000962370) Reply frame received for 3\nI0214 14:41:38.849932    3580 log.go:172] (0xc000962370) (0xc000622280) Create stream\nI0214 14:41:38.849946    3580 log.go:172] (0xc000962370) (0xc000622280) Stream added, broadcasting: 5\nI0214 14:41:38.851918    3580 log.go:172] (0xc000962370) Reply frame received for 5\nI0214 14:41:38.973423    3580 log.go:172] (0xc000962370) Data frame received for 5\nI0214 14:41:38.973460    3580 log.go:172] (0xc000622280) (5) Data frame handling\nI0214 14:41:38.973485    3580 log.go:172] (0xc000622280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0214 14:41:39.021708    3580 log.go:172] (0xc000962370) Data frame received for 3\nI0214 14:41:39.021748    3580 log.go:172] (0xc0006221e0) (3) Data frame handling\nI0214 14:41:39.021773    3580 log.go:172] (0xc0006221e0) (3) Data frame sent\nI0214 14:41:39.148430    3580 log.go:172] (0xc000962370) Data frame received for 1\nI0214 14:41:39.148507    3580 log.go:172] (0xc000962370) (0xc0006221e0) Stream removed, broadcasting: 3\nI0214 14:41:39.148551    3580 log.go:172] (0xc0009046e0) (1) Data frame handling\nI0214 14:41:39.148570    3580 log.go:172] (0xc0009046e0) (1) Data frame sent\nI0214 14:41:39.148596    3580 log.go:172] (0xc000962370) (0xc000622280) Stream removed, broadcasting: 5\nI0214 14:41:39.148643    3580 log.go:172] (0xc000962370) (0xc0009046e0) Stream removed, broadcasting: 1\nI0214 14:41:39.148705    3580 log.go:172] (0xc000962370) Go away received\nI0214 14:41:39.149553    3580 log.go:172] (0xc000962370) (0xc0009046e0) Stream removed, broadcasting: 1\nI0214 14:41:39.149590    3580 log.go:172] (0xc000962370) (0xc0006221e0) Stream removed, broadcasting: 3\nI0214 14:41:39.149608    3580 log.go:172] (0xc000962370) (0xc000622280) Stream removed, broadcasting: 5\n"
Feb 14 14:41:39.161: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 14:41:39.161: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 14:41:39.230: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 14 14:41:49.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6245 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 14:41:49.691: INFO: stderr: "I0214 14:41:49.501381    3601 log.go:172] (0xc000117080) (0xc000658aa0) Create stream\nI0214 14:41:49.501632    3601 log.go:172] (0xc000117080) (0xc000658aa0) Stream added, broadcasting: 1\nI0214 14:41:49.508389    3601 log.go:172] (0xc000117080) Reply frame received for 1\nI0214 14:41:49.508461    3601 log.go:172] (0xc000117080) (0xc0008c8000) Create stream\nI0214 14:41:49.508482    3601 log.go:172] (0xc000117080) (0xc0008c8000) Stream added, broadcasting: 3\nI0214 14:41:49.509904    3601 log.go:172] (0xc000117080) Reply frame received for 3\nI0214 14:41:49.509932    3601 log.go:172] (0xc000117080) (0xc000658b40) Create stream\nI0214 14:41:49.509942    3601 log.go:172] (0xc000117080) (0xc000658b40) Stream added, broadcasting: 5\nI0214 14:41:49.511271    3601 log.go:172] (0xc000117080) Reply frame received for 5\nI0214 14:41:49.600635    3601 log.go:172] (0xc000117080) Data frame received for 3\nI0214 14:41:49.600731    3601 log.go:172] (0xc0008c8000) (3) Data frame handling\nI0214 14:41:49.600776    3601 log.go:172] (0xc0008c8000) (3) Data frame sent\nI0214 14:41:49.600909    3601 log.go:172] (0xc000117080) Data frame received for 5\nI0214 14:41:49.600939    3601 log.go:172] (0xc000658b40) (5) Data frame handling\nI0214 14:41:49.600957    3601 log.go:172] (0xc000658b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0214 14:41:49.675516    3601 log.go:172] (0xc000117080) Data frame received for 1\nI0214 14:41:49.676027    3601 log.go:172] (0xc000117080) (0xc000658b40) Stream removed, broadcasting: 5\nI0214 14:41:49.676153    3601 log.go:172] (0xc000658aa0) (1) Data frame handling\nI0214 14:41:49.676189    3601 log.go:172] (0xc000658aa0) (1) Data frame sent\nI0214 14:41:49.676233    3601 log.go:172] (0xc000117080) (0xc0008c8000) Stream removed, broadcasting: 3\nI0214 14:41:49.676647    3601 log.go:172] (0xc000117080) (0xc000658aa0) Stream removed, broadcasting: 1\nI0214 14:41:49.677947    3601 log.go:172] (0xc000117080) Go away received\nI0214 14:41:49.681576    3601 log.go:172] (0xc000117080) (0xc000658aa0) Stream removed, broadcasting: 1\nI0214 14:41:49.681945    3601 log.go:172] (0xc000117080) (0xc0008c8000) Stream removed, broadcasting: 3\nI0214 14:41:49.682051    3601 log.go:172] (0xc000117080) (0xc000658b40) Stream removed, broadcasting: 5\n"
Feb 14 14:41:49.691: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 14:41:49.691: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 14:41:59.722: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:41:59.722: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:41:59.722: INFO: Waiting for Pod statefulset-6245/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:41:59.722: INFO: Waiting for Pod statefulset-6245/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:09.732: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:42:09.732: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:09.732: INFO: Waiting for Pod statefulset-6245/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:19.755: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:42:19.756: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:19.756: INFO: Waiting for Pod statefulset-6245/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:29.755: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:42:29.755: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:39.783: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
Feb 14 14:42:39.783: INFO: Waiting for Pod statefulset-6245/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 14:42:49.731: INFO: Waiting for StatefulSet statefulset-6245/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 14 14:42:59.739: INFO: Deleting all statefulset in ns statefulset-6245
Feb 14 14:42:59.744: INFO: Scaling statefulset ss2 to 0
Feb 14 14:43:39.789: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 14:43:39.798: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:43:39.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6245" for this suite.
Feb 14 14:43:47.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:43:48.012: INFO: namespace statefulset-6245 deletion completed in 8.180575279s

• [SLOW TEST:243.266 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
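Editor's note: the rolling update and rollback traced above are driven by the StatefulSet's `updateStrategy`; with `type: RollingUpdate` the controller replaces pods one at a time from the highest ordinal down, which is why the log handles ss2-1 before ss2-0. A minimal sketch of the kind of spec involved (the name `ss2`, labels, and replica count here are illustrative, not the test's generated spec):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                 # illustrative; the e2e framework generates its own spec
spec:
  serviceName: ss2
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate     # pods are updated one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the test's update changes this tag
```

A rollback is simply another template update back to the previous image; the controller again proceeds in reverse ordinal order, producing a new (or reused) controller revision such as the `ss2-7c9b54fd4c`/`ss2-6c5cd755cd` pair the log waits on.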
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:43:48.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 14 14:43:48.693: INFO: created pod pod-service-account-defaultsa
Feb 14 14:43:48.693: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 14 14:43:48.706: INFO: created pod pod-service-account-mountsa
Feb 14 14:43:48.706: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 14 14:43:48.732: INFO: created pod pod-service-account-nomountsa
Feb 14 14:43:48.733: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 14 14:43:48.745: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 14 14:43:48.745: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 14 14:43:48.845: INFO: created pod pod-service-account-mountsa-mountspec
Feb 14 14:43:48.845: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 14 14:43:48.875: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 14 14:43:48.875: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 14 14:43:48.919: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 14 14:43:48.919: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 14 14:43:49.031: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 14 14:43:49.032: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 14 14:43:49.112: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 14 14:43:49.112: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:43:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8303" for this suite.
Feb 14 14:44:14.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:44:14.300: INFO: namespace svcaccounts-8303 deletion completed in 25.06256903s

• [SLOW TEST:26.287 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
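Editor's note: the automount opt-out tested above can be set on the ServiceAccount, on the Pod, or both; when both are set, the Pod-level field wins, which is why `mountsa-nomountspec` above ends up with `volume mount: false` despite its ServiceAccount allowing automount. A sketch of one opt-out combination (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                      # illustrative name
automountServiceAccountToken: false     # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                    # illustrative name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level field overrides the SA setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```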
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:44:14.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:44:14.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1167" for this suite.
Feb 14 14:44:20.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:44:20.645: INFO: namespace kubelet-test-1167 deletion completed in 6.15283064s

• [SLOW TEST:6.344 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:44:20.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b9b55230-0f58-46e5-a4d7-769bef53dca1
STEP: Creating a pod to test consume configMaps
Feb 14 14:44:20.897: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273" in namespace "projected-5357" to be "success or failure"
Feb 14 14:44:20.929: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Pending", Reason="", readiness=false. Elapsed: 31.401186ms
Feb 14 14:44:22.936: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038658264s
Feb 14 14:44:24.951: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053606078s
Feb 14 14:44:26.959: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061244136s
Feb 14 14:44:28.981: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083631795s
Feb 14 14:44:30.999: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10165533s
STEP: Saw pod success
Feb 14 14:44:30.999: INFO: Pod "pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273" satisfied condition "success or failure"
Feb 14 14:44:31.004: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 14:44:31.139: INFO: Waiting for pod pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273 to disappear
Feb 14 14:44:31.143: INFO: Pod pod-projected-configmaps-ebffe0e6-d334-4715-acde-752785c22273 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:44:31.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5357" for this suite.
Feb 14 14:44:37.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:44:37.298: INFO: namespace projected-5357 deletion completed in 6.14990126s

• [SLOW TEST:16.653 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
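Editor's note: "consumable in multiple volumes in the same pod" means the same ConfigMap is projected into two separate volumes and read from both mount points. A sketch of that shape (ConfigMap name, keys, and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo            # illustrative
spec:
  restartPolicy: Never
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: shared-config     # same ConfigMap backs both volumes
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: shared-config
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /etc/vol-1/data-1 && cat /etc/vol-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/vol-1
    - name: vol-2
      mountPath: /etc/vol-2
```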
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:44:37.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b96cd215-361d-44f3-bead-c8f4482c994d in namespace container-probe-3611
Feb 14 14:44:47.539: INFO: Started pod liveness-b96cd215-361d-44f3-bead-c8f4482c994d in namespace container-probe-3611
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 14:44:47.542: INFO: Initial restart count of pod liveness-b96cd215-361d-44f3-bead-c8f4482c994d is 0
Feb 14 14:45:06.457: INFO: Restart count of pod container-probe-3611/liveness-b96cd215-361d-44f3-bead-c8f4482c994d is now 1 (18.915082381s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:45:06.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3611" for this suite.
Feb 14 14:45:12.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:45:12.733: INFO: namespace container-probe-3611 deletion completed in 6.211441344s

• [SLOW TEST:35.435 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
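Editor's note: the restart observed above (restartCount 0 → 1 after ~19s) is the kubelet reacting to a failing HTTP liveness probe. A sketch of a pod with such a probe; the image, port, and probe timings below are assumptions for illustration, not the test's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http             # illustrative
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # illustrative: a server whose /healthz starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz            # kubelet restarts the container when this returns >= 400
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
```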
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:45:12.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 14:45:38.978: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:38.990: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:40.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:41.000: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:42.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:42.999: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:44.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:44.999: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:46.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:47.002: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:48.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:48.999: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:50.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:51.003: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:52.992: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:53.003: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:54.992: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:55.006: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:56.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:57.007: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:45:58.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:45:58.999: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:46:00.992: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:46:01.004: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:46:02.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:46:02.996: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:46:04.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:46:04.998: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 14:46:06.991: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 14:46:07.001: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:46:07.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6478" for this suite.
Feb 14 14:46:29.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:46:29.157: INFO: namespace container-lifecycle-hook-6478 deletion completed in 22.121980308s

• [SLOW TEST:76.423 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
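Editor's note: the long "still exists" loop above is expected: on deletion the kubelet runs the preStop exec hook to completion (bounded by the pod's termination grace period) before the container is stopped, so the pod lingers while the hook runs. A sketch of a pod with a preStop exec hook; the hook command here is illustrative (the e2e test's hook instead calls back to a handler pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo              # illustrative
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM is delivered
          command: ["sh", "-c", "echo goodbye > /tmp/prestop"]
```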
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:46:29.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6516d0a9-4c5f-4d5a-97dd-feb6f9636270
STEP: Creating a pod to test consume configMaps
Feb 14 14:46:29.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a" in namespace "configmap-1899" to be "success or failure"
Feb 14 14:46:29.276: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.297239ms
Feb 14 14:46:31.283: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017823529s
Feb 14 14:46:33.290: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02500401s
Feb 14 14:46:35.299: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03337022s
Feb 14 14:46:37.311: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04613537s
Feb 14 14:46:39.665: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.399805599s
STEP: Saw pod success
Feb 14 14:46:39.665: INFO: Pod "pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a" satisfied condition "success or failure"
Feb 14 14:46:39.670: INFO: Trying to get logs from node iruya-node pod pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a container configmap-volume-test: 
STEP: delete the pod
Feb 14 14:46:39.724: INFO: Waiting for pod pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a to disappear
Feb 14 14:46:39.728: INFO: Pod pod-configmaps-482b4def-47ce-400d-861d-3b8bd2cd637a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:46:39.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1899" for this suite.
Feb 14 14:46:45.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:46:45.875: INFO: namespace configmap-1899 deletion completed in 6.141376543s

• [SLOW TEST:16.717 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
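
(Editor's note: for readers following along, the pod that the ConfigMap-volume test above creates looks roughly like the manifest below. All names, the image, and the args are illustrative approximations of what the e2e framework generates — the real test uses UUID-suffixed names, as seen in the log — not an exact reproduction.)

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume     # real test appends a generated UUID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps            # real test appends a generated UUID
spec:
  restartPolicy: Never            # pod runs once; test waits for "Succeeded"
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

The test then reads the container's logs to confirm the mounted file's content matched the ConfigMap data before deleting the pod.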
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:46:45.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 14 14:46:46.007: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4007,SelfLink:/api/v1/namespaces/watch-4007/configmaps/e2e-watch-test-watch-closed,UID:9b695dce-af16-48f8-978f-523aff625b2d,ResourceVersion:24334789,Generation:0,CreationTimestamp:2020-02-14 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 14:46:46.008: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4007,SelfLink:/api/v1/namespaces/watch-4007/configmaps/e2e-watch-test-watch-closed,UID:9b695dce-af16-48f8-978f-523aff625b2d,ResourceVersion:24334790,Generation:0,CreationTimestamp:2020-02-14 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 14 14:46:46.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4007,SelfLink:/api/v1/namespaces/watch-4007/configmaps/e2e-watch-test-watch-closed,UID:9b695dce-af16-48f8-978f-523aff625b2d,ResourceVersion:24334791,Generation:0,CreationTimestamp:2020-02-14 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 14:46:46.020: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4007,SelfLink:/api/v1/namespaces/watch-4007/configmaps/e2e-watch-test-watch-closed,UID:9b695dce-af16-48f8-978f-523aff625b2d,ResourceVersion:24334792,Generation:0,CreationTimestamp:2020-02-14 14:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:46:46.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4007" for this suite.
Feb 14 14:46:52.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:46:52.157: INFO: namespace watch-4007 deletion completed in 6.134293662s

• [SLOW TEST:6.282 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
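
(Editor's note: the Watchers test above closes a watch after two notifications, mutates the object while no watch is open, then resumes from the last observed `ResourceVersion` and expects exactly the missed MODIFIED and DELETED events. This is a minimal in-memory sketch of that resume pattern — no API server involved; `FakeWatchSource` and its integer resource versions are illustrative stand-ins for the apiserver's watch cache.)

```python
class FakeWatchSource:
    """Holds an ordered event log, loosely like the apiserver's watch cache."""

    def __init__(self):
        self.events = []  # list of (resource_version, event_type, payload)
        self.rv = 0       # monotonically increasing, like etcd revisions

    def record(self, ev_type, payload):
        self.rv += 1
        self.events.append((self.rv, ev_type, payload))

    def watch(self, since_rv=0):
        """Yield every event strictly after since_rv."""
        for rv, ev_type, payload in self.events:
            if rv > since_rv:
                yield rv, ev_type, payload


src = FakeWatchSource()
src.record("ADDED", {"mutation": 0})      # create the configmap
src.record("MODIFIED", {"mutation": 1})   # modify it once

# First watch: consume two notifications, remember the last resourceVersion,
# then "close" the watch by simply not iterating further.
last_rv = 0
first_watch = src.watch()
for _ in range(2):
    last_rv, _, _ = next(first_watch)

# Changes happen while no watch is open.
src.record("MODIFIED", {"mutation": 2})
src.record("DELETED", {"mutation": 2})

# Second watch resumes from last_rv and sees exactly the missed events.
resumed = [(t, p["mutation"]) for _, t, p in src.watch(since_rv=last_rv)]
print(resumed)  # [('MODIFIED', 2), ('DELETED', 2)]
```

The real test does the same through the Kubernetes API: it passes the saved `ResourceVersion` in the new watch's list options, so no intervening change is lost.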
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:46:52.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 14 14:46:52.263: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 14:46:52.275: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 14:46:52.279: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 14 14:46:52.301: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 14 14:46:52.301: INFO: 	Container weave ready: true, restart count 0
Feb 14 14:46:52.302: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 14:46:52.302: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.302: INFO: 	Container kube-bench ready: false, restart count 0
Feb 14 14:46:52.302: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.302: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 14:46:52.302: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 14 14:46:52.323: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 14 14:46:52.323: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container coredns ready: true, restart count 0
Feb 14 14:46:52.323: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container etcd ready: true, restart count 0
Feb 14 14:46:52.323: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container weave ready: true, restart count 0
Feb 14 14:46:52.323: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 14:46:52.323: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container coredns ready: true, restart count 0
Feb 14 14:46:52.323: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 14 14:46:52.323: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 14:46:52.323: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:52.323: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f34c242c0cd233], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:46:53.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1270" for this suite.
Feb 14 14:46:59.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:46:59.605: INFO: namespace sched-pred-1270 deletion completed in 6.207950991s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.447 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:46:59.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 14 14:46:59.667: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 14:46:59.681: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 14:46:59.686: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb 14 14:46:59.703: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 14 14:46:59.703: INFO: 	Container weave ready: true, restart count 0
Feb 14 14:46:59.703: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 14:46:59.703: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.703: INFO: 	Container kube-bench ready: false, restart count 0
Feb 14 14:46:59.703: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.703: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 14:46:59.703: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 14 14:46:59.734: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container etcd ready: true, restart count 0
Feb 14 14:46:59.734: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container weave ready: true, restart count 0
Feb 14 14:46:59.734: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 14:46:59.734: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container coredns ready: true, restart count 0
Feb 14 14:46:59.734: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container kube-controller-manager ready: true, restart count 21
Feb 14 14:46:59.734: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 14:46:59.734: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb 14 14:46:59.734: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb 14 14:46:59.734: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 14 14:46:59.734: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-97870077-62ec-4591-988e-495cb73277e3 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-97870077-62ec-4591-988e-495cb73277e3 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-97870077-62ec-4591-988e-495cb73277e3
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:47:21.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5088" for this suite.
Feb 14 14:47:42.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:47:42.182: INFO: namespace sched-pred-5088 deletion completed in 20.134243792s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:42.576 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:47:42.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 14:47:42.455: INFO: Waiting up to 5m0s for pod "pod-5e803235-b146-4540-8c72-daa2387b41ed" in namespace "emptydir-8756" to be "success or failure"
Feb 14 14:47:42.459: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.942591ms
Feb 14 14:47:44.543: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088375372s
Feb 14 14:47:46.558: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102865281s
Feb 14 14:47:48.602: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146783734s
Feb 14 14:47:50.615: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160101677s
Feb 14 14:47:52.627: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.172462169s
STEP: Saw pod success
Feb 14 14:47:52.627: INFO: Pod "pod-5e803235-b146-4540-8c72-daa2387b41ed" satisfied condition "success or failure"
Feb 14 14:47:52.631: INFO: Trying to get logs from node iruya-node pod pod-5e803235-b146-4540-8c72-daa2387b41ed container test-container: 
STEP: delete the pod
Feb 14 14:47:52.963: INFO: Waiting for pod pod-5e803235-b146-4540-8c72-daa2387b41ed to disappear
Feb 14 14:47:52.971: INFO: Pod pod-5e803235-b146-4540-8c72-daa2387b41ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:47:52.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8756" for this suite.
Feb 14 14:47:59.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:47:59.139: INFO: namespace emptydir-8756 deletion completed in 6.131735639s

• [SLOW TEST:16.957 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:47:59.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 14 14:47:59.257: INFO: namespace kubectl-8676
Feb 14 14:47:59.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8676'
Feb 14 14:47:59.897: INFO: stderr: ""
Feb 14 14:47:59.897: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 14 14:48:00.912: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:00.912: INFO: Found 0 / 1
Feb 14 14:48:01.913: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:01.914: INFO: Found 0 / 1
Feb 14 14:48:02.914: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:02.914: INFO: Found 0 / 1
Feb 14 14:48:03.910: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:03.910: INFO: Found 0 / 1
Feb 14 14:48:04.909: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:04.910: INFO: Found 0 / 1
Feb 14 14:48:05.907: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:05.907: INFO: Found 0 / 1
Feb 14 14:48:06.908: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:06.909: INFO: Found 0 / 1
Feb 14 14:48:07.912: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:07.913: INFO: Found 1 / 1
Feb 14 14:48:07.913: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 14:48:07.925: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 14:48:07.925: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 14:48:07.925: INFO: wait on redis-master startup in kubectl-8676 
Feb 14 14:48:07.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wrhgq redis-master --namespace=kubectl-8676'
Feb 14 14:48:08.184: INFO: stderr: ""
Feb 14 14:48:08.184: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Feb 14:48:07.059 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 14:48:07.061 # Server started, Redis version 3.2.12\n1:M 14 Feb 14:48:07.062 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 14:48:07.062 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 14 14:48:08.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8676'
Feb 14 14:48:08.987: INFO: stderr: ""
Feb 14 14:48:08.988: INFO: stdout: "service/rm2 exposed\n"
Feb 14 14:48:09.016: INFO: Service rm2 in namespace kubectl-8676 found.
STEP: exposing service
Feb 14 14:48:11.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8676'
Feb 14 14:48:11.367: INFO: stderr: ""
Feb 14 14:48:11.367: INFO: stdout: "service/rm3 exposed\n"
Feb 14 14:48:11.373: INFO: Service rm3 in namespace kubectl-8676 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:48:13.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8676" for this suite.
Feb 14 14:48:37.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:48:37.681: INFO: namespace kubectl-8676 deletion completed in 24.290454753s

• [SLOW TEST:38.541 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:48:37.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:48:37.868: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 14 14:48:42.918: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 14:48:47.314: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 14 14:48:49.332: INFO: Creating deployment "test-rollover-deployment"
Feb 14 14:48:49.369: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 14 14:48:51.462: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 14 14:48:51.473: INFO: Ensure that both replica sets have 1 created replica
Feb 14 14:48:51.480: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 14 14:48:51.488: INFO: Updating deployment test-rollover-deployment
Feb 14 14:48:51.488: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 14 14:48:53.582: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 14 14:48:53.594: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 14 14:48:53.605: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:48:53.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:48:55.616: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:48:55.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:48:57.617: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:48:57.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:48:59.736: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:48:59.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:01.636: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:01.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288531, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:03.631: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:03.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288542, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:05.621: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:05.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288542, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:07.628: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:07.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288542, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:09.637: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:09.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288542, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:11.681: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 14:49:11.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288542, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717288529, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 14:49:13.631: INFO: 
Feb 14 14:49:13.631: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 14 14:49:13.727: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6501,SelfLink:/apis/apps/v1/namespaces/deployment-6501/deployments/test-rollover-deployment,UID:9b15b6ca-3569-4baf-b111-0b6200ea089b,ResourceVersion:24335183,Generation:2,CreationTimestamp:2020-02-14 14:48:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-14 14:48:49 +0000 UTC 2020-02-14 14:48:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-14 14:49:12 +0000 UTC 2020-02-14 14:48:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 14:49:13.736: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6501,SelfLink:/apis/apps/v1/namespaces/deployment-6501/replicasets/test-rollover-deployment-854595fc44,UID:d1afef17-bcc9-4fa0-8ba8-325c8e5be6c4,ResourceVersion:24335174,Generation:2,CreationTimestamp:2020-02-14 14:48:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b15b6ca-3569-4baf-b111-0b6200ea089b 0xc00257ead7 0xc00257ead8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 14 14:49:13.736: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 14 14:49:13.736: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6501,SelfLink:/apis/apps/v1/namespaces/deployment-6501/replicasets/test-rollover-controller,UID:21ea4e0e-e46a-4b8f-88bb-faa052f24323,ResourceVersion:24335182,Generation:2,CreationTimestamp:2020-02-14 14:48:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b15b6ca-3569-4baf-b111-0b6200ea089b 0xc00257e9ef 0xc00257ea00}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 14:49:13.737: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6501,SelfLink:/apis/apps/v1/namespaces/deployment-6501/replicasets/test-rollover-deployment-9b8b997cf,UID:f48e02a8-e24c-4488-8ad7-85e060273487,ResourceVersion:24335137,Generation:2,CreationTimestamp:2020-02-14 14:48:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 9b15b6ca-3569-4baf-b111-0b6200ea089b 0xc00257eba0 0xc00257eba1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 14:49:13.748: INFO: Pod "test-rollover-deployment-854595fc44-cpwm9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-cpwm9,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6501,SelfLink:/api/v1/namespaces/deployment-6501/pods/test-rollover-deployment-854595fc44-cpwm9,UID:9ea44b4f-0534-453f-ac81-39f12e6d4aaa,ResourceVersion:24335158,Generation:0,CreationTimestamp:2020-02-14 14:48:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 d1afef17-bcc9-4fa0-8ba8-325c8e5be6c4 0xc001c63bb7 0xc001c63bb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2fn9r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2fn9r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2fn9r true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c63c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c63c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:48:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:49:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:49:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 14:48:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-14 14:48:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-14 14:49:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8625e9c78769819029837401d5dfbdd4d1dd8e510ecd12c257b8bffbf5d85030}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:49:13.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6501" for this suite.
Feb 14 14:49:21.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:49:22.154: INFO: namespace deployment-6501 deletion completed in 8.397487811s

• [SLOW TEST:44.473 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
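For reference, the Deployment this rollover test exercised can be sketched from the object dump above. This is a reconstruction from the logged `DeploymentSpec`, not the test's actual source; the names, labels, image, and strategy fields are taken verbatim from the log.

```yaml
# Reconstructed from the "test-rollover-deployment" dump above (a sketch, not test source).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # makes the rollover take measurable time, as seen in the polling above
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired available count
      maxSurge: 1              # allow one extra pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the controller keeps the old ReplicaSet at one ready pod until the new pod has been ready for 10 seconds, which is why the status polls above report `UpdatedReplicas:1, ReadyReplicas:2` for several iterations before both old ReplicaSets are scaled to zero.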
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:49:22.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 14 14:49:22.568: INFO: Waiting up to 5m0s for pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e" in namespace "var-expansion-6695" to be "success or failure"
Feb 14 14:49:22.579: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.520252ms
Feb 14 14:49:24.593: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024786798s
Feb 14 14:49:26.610: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04162604s
Feb 14 14:49:28.617: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048685752s
Feb 14 14:49:30.630: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06157629s
STEP: Saw pod success
Feb 14 14:49:30.630: INFO: Pod "var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e" satisfied condition "success or failure"
Feb 14 14:49:30.637: INFO: Trying to get logs from node iruya-node pod var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e container dapi-container: 
STEP: delete the pod
Feb 14 14:49:30.800: INFO: Waiting for pod var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e to disappear
Feb 14 14:49:30.806: INFO: Pod var-expansion-78cff05e-5a00-462c-8af3-9fcdabcd180e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:49:30.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6695" for this suite.
Feb 14 14:49:36.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:49:36.946: INFO: namespace var-expansion-6695 deletion completed in 6.134005451s

• [SLOW TEST:14.791 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
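The variable-expansion test above creates a pod whose `command` references an environment variable with `$(VAR)` syntax, which the kubelet expands before starting the container. A minimal sketch of such a pod follows; the container name `dapi-container` is taken from the log, but the image, variable name, and command are assumptions for illustration.

```yaml
# Minimal sketch of command substitution (hypothetical values except the container name).
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed image
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by the kubelet, not by a shell.
    command: ["/bin/echo", "$(MESSAGE)"]
```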
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:49:36.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 14:49:37.042: INFO: Waiting up to 5m0s for pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca" in namespace "emptydir-6292" to be "success or failure"
Feb 14 14:49:37.047: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.884902ms
Feb 14 14:49:39.067: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024866435s
Feb 14 14:49:41.078: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035347201s
Feb 14 14:49:43.135: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.091927009s
Feb 14 14:49:45.149: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106791958s
STEP: Saw pod success
Feb 14 14:49:45.150: INFO: Pod "pod-6716b9d8-f901-4d41-97ce-3c2a078cccca" satisfied condition "success or failure"
Feb 14 14:49:45.159: INFO: Trying to get logs from node iruya-node pod pod-6716b9d8-f901-4d41-97ce-3c2a078cccca container test-container: 
STEP: delete the pod
Feb 14 14:49:45.246: INFO: Waiting for pod pod-6716b9d8-f901-4d41-97ce-3c2a078cccca to disappear
Feb 14 14:49:45.252: INFO: Pod pod-6716b9d8-f901-4d41-97ce-3c2a078cccca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:49:45.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6292" for this suite.
Feb 14 14:49:51.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:49:51.399: INFO: namespace emptydir-6292 deletion completed in 6.139776838s

• [SLOW TEST:14.453 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
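The EmptyDir test above mounts a tmpfs-backed `emptyDir` volume and verifies a file created with mode 0666 under it. A rough equivalent manifest, with the container name taken from the log and the image and paths assumed:

```yaml
# Sketch of a tmpfs-backed emptyDir pod (paths and image are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox            # assumed; the real test uses a mounttest image
    # Write a file as root with mode 0666, then report its permissions.
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # "Memory" backs the volume with tmpfs
```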
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:49:51.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 14:50:00.727: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:50:00.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8914" for this suite.
Feb 14 14:50:06.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:50:06.967: INFO: namespace container-runtime-8914 deletion completed in 6.165653935s

• [SLOW TEST:15.569 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
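The termination-message test above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container fails without writing `/dev/termination-log`, the kubelet uses the tail of the container's log as its termination message, which is why the log output `DONE` matched the expected message. A hedged sketch, with image and command assumed:

```yaml
# Sketch of FallbackToLogsOnError (image and command are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox            # assumed image
    # Print to stdout and exit non-zero without writing /dev/termination-log;
    # the kubelet then takes the log tail ("DONE") as the termination message.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```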
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:50:06.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 14 14:50:07.043: INFO: Waiting up to 5m0s for pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7" in namespace "var-expansion-2324" to be "success or failure"
Feb 14 14:50:07.098: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Pending", Reason="", readiness=false. Elapsed: 55.153479ms
Feb 14 14:50:09.108: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065314687s
Feb 14 14:50:11.115: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072310154s
Feb 14 14:50:13.123: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079564726s
Feb 14 14:50:15.132: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0892505s
Feb 14 14:50:17.144: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101168793s
STEP: Saw pod success
Feb 14 14:50:17.144: INFO: Pod "var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7" satisfied condition "success or failure"
Feb 14 14:50:17.148: INFO: Trying to get logs from node iruya-node pod var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7 container dapi-container: 
STEP: delete the pod
Feb 14 14:50:17.312: INFO: Waiting for pod var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7 to disappear
Feb 14 14:50:17.324: INFO: Pod var-expansion-44d5344d-e126-4801-9769-9b7e7198cee7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:50:17.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2324" for this suite.
Feb 14 14:50:23.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:50:23.542: INFO: namespace var-expansion-2324 deletion completed in 6.203710128s

• [SLOW TEST:16.574 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:50:23.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:50:23.646: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 14 14:50:26.783: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:50:26.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7664" for this suite.
Feb 14 14:50:40.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:50:41.048: INFO: namespace replication-controller-7664 deletion completed in 14.223672917s

• [SLOW TEST:17.506 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:50:41.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:50:41.168: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b" in namespace "downward-api-2654" to be "success or failure"
Feb 14 14:50:41.173: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.0587ms
Feb 14 14:50:43.182: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014078889s
Feb 14 14:50:45.209: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041433614s
Feb 14 14:50:47.218: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049783243s
Feb 14 14:50:49.229: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06057613s
Feb 14 14:50:51.244: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075628014s
STEP: Saw pod success
Feb 14 14:50:51.244: INFO: Pod "downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b" satisfied condition "success or failure"
Feb 14 14:50:51.255: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b container client-container: 
STEP: delete the pod
Feb 14 14:50:51.422: INFO: Waiting for pod downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b to disappear
Feb 14 14:50:51.437: INFO: Pod downwardapi-volume-bfacf470-6ade-4230-866f-051698e5de7b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:50:51.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2654" for this suite.
Feb 14 14:50:57.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:50:57.822: INFO: namespace downward-api-2654 deletion completed in 6.303830835s

• [SLOW TEST:16.774 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:50:57.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8036
I0214 14:50:57.927396       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8036, replica count: 1
I0214 14:50:58.978350       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:50:59.978898       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:00.979712       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:01.980256       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:02.980782       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:03.981776       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:04.982727       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 14:51:05.983116       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 14:51:06.156: INFO: Created: latency-svc-lrz2c
Feb 14 14:51:06.173: INFO: Got endpoints: latency-svc-lrz2c [90.093362ms]
Feb 14 14:51:06.251: INFO: Created: latency-svc-fz9w8
Feb 14 14:51:06.292: INFO: Got endpoints: latency-svc-fz9w8 [118.144331ms]
Feb 14 14:51:06.358: INFO: Created: latency-svc-gkqwz
Feb 14 14:51:06.364: INFO: Got endpoints: latency-svc-gkqwz [188.775535ms]
Feb 14 14:51:06.446: INFO: Created: latency-svc-7qbsb
Feb 14 14:51:06.453: INFO: Got endpoints: latency-svc-7qbsb [278.754636ms]
Feb 14 14:51:06.639: INFO: Created: latency-svc-wxgs4
Feb 14 14:51:06.647: INFO: Got endpoints: latency-svc-wxgs4 [472.205961ms]
Feb 14 14:51:06.756: INFO: Created: latency-svc-pq65c
Feb 14 14:51:06.762: INFO: Got endpoints: latency-svc-pq65c [587.984033ms]
Feb 14 14:51:06.803: INFO: Created: latency-svc-9gmm8
Feb 14 14:51:06.818: INFO: Got endpoints: latency-svc-9gmm8 [643.082111ms]
Feb 14 14:51:06.919: INFO: Created: latency-svc-dpgl9
Feb 14 14:51:06.920: INFO: Got endpoints: latency-svc-dpgl9 [157.562565ms]
Feb 14 14:51:06.984: INFO: Created: latency-svc-t8fcw
Feb 14 14:51:06.995: INFO: Got endpoints: latency-svc-t8fcw [820.844842ms]
Feb 14 14:51:07.069: INFO: Created: latency-svc-pgnb9
Feb 14 14:51:07.079: INFO: Got endpoints: latency-svc-pgnb9 [904.010129ms]
Feb 14 14:51:07.124: INFO: Created: latency-svc-6fs26
Feb 14 14:51:07.135: INFO: Got endpoints: latency-svc-6fs26 [960.969228ms]
Feb 14 14:51:07.249: INFO: Created: latency-svc-7fvgh
Feb 14 14:51:07.260: INFO: Got endpoints: latency-svc-7fvgh [1.084797131s]
Feb 14 14:51:07.304: INFO: Created: latency-svc-pv5ch
Feb 14 14:51:07.308: INFO: Got endpoints: latency-svc-pv5ch [1.134411536s]
Feb 14 14:51:07.397: INFO: Created: latency-svc-8r9tv
Feb 14 14:51:07.409: INFO: Got endpoints: latency-svc-8r9tv [1.233729072s]
Feb 14 14:51:07.449: INFO: Created: latency-svc-b9smc
Feb 14 14:51:07.498: INFO: Created: latency-svc-bqvkg
Feb 14 14:51:07.548: INFO: Got endpoints: latency-svc-b9smc [1.37350742s]
Feb 14 14:51:07.556: INFO: Got endpoints: latency-svc-bqvkg [1.381964552s]
Feb 14 14:51:07.610: INFO: Created: latency-svc-j7d46
Feb 14 14:51:07.640: INFO: Got endpoints: latency-svc-j7d46 [1.464715932s]
Feb 14 14:51:07.703: INFO: Created: latency-svc-7gmqm
Feb 14 14:51:07.726: INFO: Got endpoints: latency-svc-7gmqm [1.433430465s]
Feb 14 14:51:07.791: INFO: Created: latency-svc-g6dk5
Feb 14 14:51:07.897: INFO: Got endpoints: latency-svc-g6dk5 [1.533463689s]
Feb 14 14:51:07.956: INFO: Created: latency-svc-s5fn7
Feb 14 14:51:07.979: INFO: Got endpoints: latency-svc-s5fn7 [1.525347133s]
Feb 14 14:51:08.108: INFO: Created: latency-svc-68zpd
Feb 14 14:51:08.108: INFO: Got endpoints: latency-svc-68zpd [1.460751908s]
Feb 14 14:51:08.186: INFO: Created: latency-svc-xwbbm
Feb 14 14:51:08.320: INFO: Got endpoints: latency-svc-xwbbm [1.501909259s]
Feb 14 14:51:08.585: INFO: Created: latency-svc-xvx6h
Feb 14 14:51:08.794: INFO: Created: latency-svc-4p8kp
Feb 14 14:51:08.794: INFO: Got endpoints: latency-svc-xvx6h [1.873834902s]
Feb 14 14:51:08.805: INFO: Got endpoints: latency-svc-4p8kp [1.810039132s]
Feb 14 14:51:09.010: INFO: Created: latency-svc-sq76s
Feb 14 14:51:09.026: INFO: Got endpoints: latency-svc-sq76s [1.947163162s]
Feb 14 14:51:09.097: INFO: Created: latency-svc-vbl2c
Feb 14 14:51:09.454: INFO: Got endpoints: latency-svc-vbl2c [2.319154333s]
Feb 14 14:51:09.504: INFO: Created: latency-svc-zbd55
Feb 14 14:51:09.508: INFO: Got endpoints: latency-svc-zbd55 [2.248624733s]
Feb 14 14:51:09.603: INFO: Created: latency-svc-96klh
Feb 14 14:51:09.624: INFO: Got endpoints: latency-svc-96klh [2.315399457s]
Feb 14 14:51:09.669: INFO: Created: latency-svc-nsb27
Feb 14 14:51:09.680: INFO: Got endpoints: latency-svc-nsb27 [2.270600785s]
Feb 14 14:51:09.786: INFO: Created: latency-svc-rjr5w
Feb 14 14:51:09.808: INFO: Got endpoints: latency-svc-rjr5w [2.259792903s]
Feb 14 14:51:09.956: INFO: Created: latency-svc-fg44l
Feb 14 14:51:09.956: INFO: Got endpoints: latency-svc-fg44l [2.399768066s]
Feb 14 14:51:10.127: INFO: Created: latency-svc-f8n7f
Feb 14 14:51:10.131: INFO: Got endpoints: latency-svc-f8n7f [2.491193897s]
Feb 14 14:51:10.177: INFO: Created: latency-svc-dff5f
Feb 14 14:51:10.181: INFO: Got endpoints: latency-svc-dff5f [2.454276333s]
Feb 14 14:51:10.294: INFO: Created: latency-svc-z979v
Feb 14 14:51:10.294: INFO: Got endpoints: latency-svc-z979v [2.396192865s]
Feb 14 14:51:10.508: INFO: Created: latency-svc-kljpm
Feb 14 14:51:10.519: INFO: Got endpoints: latency-svc-kljpm [2.53971862s]
Feb 14 14:51:10.585: INFO: Created: latency-svc-9bcbr
Feb 14 14:51:10.734: INFO: Got endpoints: latency-svc-9bcbr [2.625870755s]
Feb 14 14:51:10.735: INFO: Created: latency-svc-76dht
Feb 14 14:51:10.746: INFO: Got endpoints: latency-svc-76dht [2.426099271s]
Feb 14 14:51:10.958: INFO: Created: latency-svc-vb7d9
Feb 14 14:51:11.116: INFO: Got endpoints: latency-svc-vb7d9 [2.321758102s]
Feb 14 14:51:11.127: INFO: Created: latency-svc-6g65p
Feb 14 14:51:11.129: INFO: Got endpoints: latency-svc-6g65p [2.323755649s]
Feb 14 14:51:11.182: INFO: Created: latency-svc-n2p9v
Feb 14 14:51:11.187: INFO: Got endpoints: latency-svc-n2p9v [2.159943204s]
Feb 14 14:51:11.281: INFO: Created: latency-svc-ljj2l
Feb 14 14:51:11.299: INFO: Got endpoints: latency-svc-ljj2l [1.843545828s]
Feb 14 14:51:11.349: INFO: Created: latency-svc-z9sd2
Feb 14 14:51:11.361: INFO: Got endpoints: latency-svc-z9sd2 [1.852668177s]
Feb 14 14:51:11.447: INFO: Created: latency-svc-246h4
Feb 14 14:51:11.458: INFO: Got endpoints: latency-svc-246h4 [1.833729547s]
Feb 14 14:51:11.536: INFO: Created: latency-svc-lzcd2
Feb 14 14:51:11.619: INFO: Got endpoints: latency-svc-lzcd2 [1.939585356s]
Feb 14 14:51:11.642: INFO: Created: latency-svc-c4d6z
Feb 14 14:51:11.659: INFO: Got endpoints: latency-svc-c4d6z [1.85097158s]
Feb 14 14:51:11.800: INFO: Created: latency-svc-jqqbc
Feb 14 14:51:11.811: INFO: Got endpoints: latency-svc-jqqbc [1.854563474s]
Feb 14 14:51:11.923: INFO: Created: latency-svc-xj9l5
Feb 14 14:51:11.927: INFO: Got endpoints: latency-svc-xj9l5 [1.795207798s]
Feb 14 14:51:11.995: INFO: Created: latency-svc-k2pdl
Feb 14 14:51:12.016: INFO: Got endpoints: latency-svc-k2pdl [1.835540832s]
Feb 14 14:51:12.095: INFO: Created: latency-svc-tg7r5
Feb 14 14:51:12.114: INFO: Got endpoints: latency-svc-tg7r5 [1.819074374s]
Feb 14 14:51:12.155: INFO: Created: latency-svc-2st4r
Feb 14 14:51:12.230: INFO: Got endpoints: latency-svc-2st4r [1.711385772s]
Feb 14 14:51:12.256: INFO: Created: latency-svc-x7w9f
Feb 14 14:51:12.260: INFO: Got endpoints: latency-svc-x7w9f [1.525302542s]
Feb 14 14:51:12.386: INFO: Created: latency-svc-2w2h2
Feb 14 14:51:12.397: INFO: Got endpoints: latency-svc-2w2h2 [1.649534344s]
Feb 14 14:51:12.463: INFO: Created: latency-svc-8ntdz
Feb 14 14:51:12.478: INFO: Got endpoints: latency-svc-8ntdz [1.361497188s]
Feb 14 14:51:12.580: INFO: Created: latency-svc-jbz2p
Feb 14 14:51:12.592: INFO: Got endpoints: latency-svc-jbz2p [1.462884235s]
Feb 14 14:51:12.667: INFO: Created: latency-svc-m9nqr
Feb 14 14:51:12.728: INFO: Got endpoints: latency-svc-m9nqr [1.540792825s]
Feb 14 14:51:12.740: INFO: Created: latency-svc-8n2xx
Feb 14 14:51:12.747: INFO: Got endpoints: latency-svc-8n2xx [1.448613926s]
Feb 14 14:51:12.788: INFO: Created: latency-svc-rs5xs
Feb 14 14:51:12.798: INFO: Got endpoints: latency-svc-rs5xs [1.437200582s]
Feb 14 14:51:12.886: INFO: Created: latency-svc-xw47b
Feb 14 14:51:12.903: INFO: Got endpoints: latency-svc-xw47b [1.444823088s]
Feb 14 14:51:12.958: INFO: Created: latency-svc-grfbn
Feb 14 14:51:12.975: INFO: Got endpoints: latency-svc-grfbn [1.35503739s]
Feb 14 14:51:13.060: INFO: Created: latency-svc-65hhx
Feb 14 14:51:13.071: INFO: Got endpoints: latency-svc-65hhx [1.41158331s]
Feb 14 14:51:13.199: INFO: Created: latency-svc-64fg2
Feb 14 14:51:13.215: INFO: Got endpoints: latency-svc-64fg2 [1.403663041s]
Feb 14 14:51:13.252: INFO: Created: latency-svc-n4k6m
Feb 14 14:51:13.258: INFO: Got endpoints: latency-svc-n4k6m [1.331330024s]
Feb 14 14:51:13.360: INFO: Created: latency-svc-w284m
Feb 14 14:51:13.368: INFO: Got endpoints: latency-svc-w284m [1.351390182s]
Feb 14 14:51:13.418: INFO: Created: latency-svc-nlxcl
Feb 14 14:51:13.424: INFO: Got endpoints: latency-svc-nlxcl [1.310279149s]
Feb 14 14:51:13.538: INFO: Created: latency-svc-tb2z2
Feb 14 14:51:13.541: INFO: Got endpoints: latency-svc-tb2z2 [1.310728838s]
Feb 14 14:51:13.606: INFO: Created: latency-svc-4nn9d
Feb 14 14:51:13.624: INFO: Got endpoints: latency-svc-4nn9d [1.363646435s]
Feb 14 14:51:13.709: INFO: Created: latency-svc-lffgb
Feb 14 14:51:13.712: INFO: Got endpoints: latency-svc-lffgb [1.314639491s]
Feb 14 14:51:13.751: INFO: Created: latency-svc-2sm6r
Feb 14 14:51:13.767: INFO: Got endpoints: latency-svc-2sm6r [1.288980671s]
Feb 14 14:51:13.887: INFO: Created: latency-svc-qm7vk
Feb 14 14:51:13.887: INFO: Got endpoints: latency-svc-qm7vk [1.294889398s]
Feb 14 14:51:13.924: INFO: Created: latency-svc-zn48w
Feb 14 14:51:14.008: INFO: Got endpoints: latency-svc-zn48w [1.280132774s]
Feb 14 14:51:14.067: INFO: Created: latency-svc-xgc7z
Feb 14 14:51:14.155: INFO: Got endpoints: latency-svc-xgc7z [1.407757941s]
Feb 14 14:51:14.206: INFO: Created: latency-svc-mldqn
Feb 14 14:51:14.225: INFO: Got endpoints: latency-svc-mldqn [1.42546717s]
Feb 14 14:51:14.320: INFO: Created: latency-svc-cbgsd
Feb 14 14:51:14.350: INFO: Got endpoints: latency-svc-cbgsd [1.446958813s]
Feb 14 14:51:14.406: INFO: Created: latency-svc-8xsbk
Feb 14 14:51:14.407: INFO: Got endpoints: latency-svc-8xsbk [1.432083075s]
Feb 14 14:51:14.492: INFO: Created: latency-svc-cvlfx
Feb 14 14:51:14.986: INFO: Got endpoints: latency-svc-cvlfx [1.914260939s]
Feb 14 14:51:15.064: INFO: Created: latency-svc-ck7sw
Feb 14 14:51:15.080: INFO: Got endpoints: latency-svc-ck7sw [1.864380498s]
Feb 14 14:51:15.278: INFO: Created: latency-svc-bjwxf
Feb 14 14:51:15.352: INFO: Got endpoints: latency-svc-bjwxf [2.093394427s]
Feb 14 14:51:15.361: INFO: Created: latency-svc-dvbrt
Feb 14 14:51:15.377: INFO: Got endpoints: latency-svc-dvbrt [2.008442533s]
Feb 14 14:51:15.408: INFO: Created: latency-svc-dcdgk
Feb 14 14:51:15.414: INFO: Got endpoints: latency-svc-dcdgk [1.989474773s]
Feb 14 14:51:15.555: INFO: Created: latency-svc-8fx4h
Feb 14 14:51:15.567: INFO: Got endpoints: latency-svc-8fx4h [2.024966042s]
Feb 14 14:51:15.605: INFO: Created: latency-svc-64bs2
Feb 14 14:51:15.612: INFO: Got endpoints: latency-svc-64bs2 [1.987953056s]
Feb 14 14:51:15.718: INFO: Created: latency-svc-hwnh4
Feb 14 14:51:15.718: INFO: Got endpoints: latency-svc-hwnh4 [2.005729006s]
Feb 14 14:51:15.758: INFO: Created: latency-svc-wwzrz
Feb 14 14:51:15.770: INFO: Got endpoints: latency-svc-wwzrz [2.001987832s]
Feb 14 14:51:15.891: INFO: Created: latency-svc-pw9zd
Feb 14 14:51:15.900: INFO: Got endpoints: latency-svc-pw9zd [2.012488035s]
Feb 14 14:51:15.954: INFO: Created: latency-svc-d4dgt
Feb 14 14:51:15.962: INFO: Got endpoints: latency-svc-d4dgt [1.953083645s]
Feb 14 14:51:16.056: INFO: Created: latency-svc-24bkm
Feb 14 14:51:16.069: INFO: Got endpoints: latency-svc-24bkm [1.91278031s]
Feb 14 14:51:16.107: INFO: Created: latency-svc-5djrd
Feb 14 14:51:16.117: INFO: Got endpoints: latency-svc-5djrd [1.892365442s]
Feb 14 14:51:16.268: INFO: Created: latency-svc-kn6ll
Feb 14 14:51:16.276: INFO: Got endpoints: latency-svc-kn6ll [1.925074433s]
Feb 14 14:51:16.357: INFO: Created: latency-svc-tpcnw
Feb 14 14:51:16.358: INFO: Got endpoints: latency-svc-tpcnw [1.950153497s]
Feb 14 14:51:16.459: INFO: Created: latency-svc-vgpnv
Feb 14 14:51:16.488: INFO: Got endpoints: latency-svc-vgpnv [1.50110212s]
Feb 14 14:51:16.561: INFO: Created: latency-svc-jp8j7
Feb 14 14:51:16.629: INFO: Got endpoints: latency-svc-jp8j7 [1.54922817s]
Feb 14 14:51:16.687: INFO: Created: latency-svc-2xg8l
Feb 14 14:51:16.693: INFO: Got endpoints: latency-svc-2xg8l [1.341118448s]
Feb 14 14:51:16.797: INFO: Created: latency-svc-6lpp2
Feb 14 14:51:16.807: INFO: Got endpoints: latency-svc-6lpp2 [1.430251814s]
Feb 14 14:51:16.856: INFO: Created: latency-svc-bjsgl
Feb 14 14:51:16.878: INFO: Got endpoints: latency-svc-bjsgl [1.463862922s]
Feb 14 14:51:16.961: INFO: Created: latency-svc-mxml5
Feb 14 14:51:16.980: INFO: Got endpoints: latency-svc-mxml5 [1.412723509s]
Feb 14 14:51:17.030: INFO: Created: latency-svc-zkthw
Feb 14 14:51:17.036: INFO: Got endpoints: latency-svc-zkthw [1.423197037s]
Feb 14 14:51:17.170: INFO: Created: latency-svc-kgdtr
Feb 14 14:51:17.182: INFO: Got endpoints: latency-svc-kgdtr [1.464195976s]
Feb 14 14:51:17.256: INFO: Created: latency-svc-d8bkk
Feb 14 14:51:17.336: INFO: Got endpoints: latency-svc-d8bkk [1.566210062s]
Feb 14 14:51:17.363: INFO: Created: latency-svc-j87s5
Feb 14 14:51:17.367: INFO: Got endpoints: latency-svc-j87s5 [1.466754175s]
Feb 14 14:51:17.407: INFO: Created: latency-svc-t9dwz
Feb 14 14:51:17.416: INFO: Got endpoints: latency-svc-t9dwz [1.454245181s]
Feb 14 14:51:17.519: INFO: Created: latency-svc-47lrq
Feb 14 14:51:17.531: INFO: Got endpoints: latency-svc-47lrq [1.461621528s]
Feb 14 14:51:17.586: INFO: Created: latency-svc-n97vd
Feb 14 14:51:17.597: INFO: Got endpoints: latency-svc-n97vd [1.479194564s]
Feb 14 14:51:17.701: INFO: Created: latency-svc-88mhz
Feb 14 14:51:17.708: INFO: Got endpoints: latency-svc-88mhz [1.431620063s]
Feb 14 14:51:17.758: INFO: Created: latency-svc-gf8ls
Feb 14 14:51:17.940: INFO: Got endpoints: latency-svc-gf8ls [1.581776229s]
Feb 14 14:51:17.947: INFO: Created: latency-svc-jlw4l
Feb 14 14:51:17.969: INFO: Got endpoints: latency-svc-jlw4l [1.480666416s]
Feb 14 14:51:18.057: INFO: Created: latency-svc-75btg
Feb 14 14:51:18.158: INFO: Got endpoints: latency-svc-75btg [1.528284154s]
Feb 14 14:51:18.177: INFO: Created: latency-svc-sd5fv
Feb 14 14:51:18.187: INFO: Got endpoints: latency-svc-sd5fv [1.493733021s]
Feb 14 14:51:18.415: INFO: Created: latency-svc-rdz9z
Feb 14 14:51:18.460: INFO: Got endpoints: latency-svc-rdz9z [1.652194473s]
Feb 14 14:51:18.593: INFO: Created: latency-svc-kc58z
Feb 14 14:51:18.617: INFO: Got endpoints: latency-svc-kc58z [1.738508435s]
Feb 14 14:51:18.644: INFO: Created: latency-svc-2bfvm
Feb 14 14:51:18.653: INFO: Got endpoints: latency-svc-2bfvm [1.673705944s]
Feb 14 14:51:18.776: INFO: Created: latency-svc-lwshr
Feb 14 14:51:18.825: INFO: Created: latency-svc-rpvkk
Feb 14 14:51:18.825: INFO: Got endpoints: latency-svc-lwshr [1.789507336s]
Feb 14 14:51:18.850: INFO: Got endpoints: latency-svc-rpvkk [1.667314168s]
Feb 14 14:51:18.998: INFO: Created: latency-svc-lxvjr
Feb 14 14:51:19.003: INFO: Got endpoints: latency-svc-lxvjr [1.666353093s]
Feb 14 14:51:19.050: INFO: Created: latency-svc-tlpbz
Feb 14 14:51:19.072: INFO: Got endpoints: latency-svc-tlpbz [1.704405673s]
Feb 14 14:51:19.279: INFO: Created: latency-svc-n777l
Feb 14 14:51:19.348: INFO: Got endpoints: latency-svc-n777l [1.931231553s]
Feb 14 14:51:19.453: INFO: Created: latency-svc-prvc6
Feb 14 14:51:19.502: INFO: Got endpoints: latency-svc-prvc6 [1.971072787s]
Feb 14 14:51:19.557: INFO: Created: latency-svc-tjrh9
Feb 14 14:51:19.557: INFO: Got endpoints: latency-svc-tjrh9 [1.960029246s]
Feb 14 14:51:19.661: INFO: Created: latency-svc-x6x69
Feb 14 14:51:19.677: INFO: Got endpoints: latency-svc-x6x69 [1.96897945s]
Feb 14 14:51:19.723: INFO: Created: latency-svc-flxdn
Feb 14 14:51:19.748: INFO: Created: latency-svc-92vrt
Feb 14 14:51:19.749: INFO: Got endpoints: latency-svc-flxdn [1.808925874s]
Feb 14 14:51:19.818: INFO: Got endpoints: latency-svc-92vrt [1.848632157s]
Feb 14 14:51:19.839: INFO: Created: latency-svc-gzb7b
Feb 14 14:51:19.844: INFO: Got endpoints: latency-svc-gzb7b [1.685911681s]
Feb 14 14:51:19.887: INFO: Created: latency-svc-67fg7
Feb 14 14:51:19.901: INFO: Got endpoints: latency-svc-67fg7 [1.713873526s]
Feb 14 14:51:20.020: INFO: Created: latency-svc-5djht
Feb 14 14:51:20.029: INFO: Got endpoints: latency-svc-5djht [1.567420377s]
Feb 14 14:51:20.067: INFO: Created: latency-svc-lvf6p
Feb 14 14:51:20.076: INFO: Got endpoints: latency-svc-lvf6p [1.458704535s]
Feb 14 14:51:20.120: INFO: Created: latency-svc-74hlk
Feb 14 14:51:20.219: INFO: Got endpoints: latency-svc-74hlk [1.564873495s]
Feb 14 14:51:20.280: INFO: Created: latency-svc-dh7xb
Feb 14 14:51:20.300: INFO: Got endpoints: latency-svc-dh7xb [1.474337123s]
Feb 14 14:51:20.705: INFO: Created: latency-svc-c8d8x
Feb 14 14:51:20.710: INFO: Got endpoints: latency-svc-c8d8x [1.860301253s]
Feb 14 14:51:20.793: INFO: Created: latency-svc-z79vv
Feb 14 14:51:20.835: INFO: Got endpoints: latency-svc-z79vv [1.831450319s]
Feb 14 14:51:20.873: INFO: Created: latency-svc-p7hvs
Feb 14 14:51:20.877: INFO: Got endpoints: latency-svc-p7hvs [1.804404052s]
Feb 14 14:51:20.936: INFO: Created: latency-svc-jxpzn
Feb 14 14:51:21.003: INFO: Got endpoints: latency-svc-jxpzn [1.655156653s]
Feb 14 14:51:21.052: INFO: Created: latency-svc-spppj
Feb 14 14:51:21.181: INFO: Got endpoints: latency-svc-spppj [1.67894753s]
Feb 14 14:51:21.182: INFO: Created: latency-svc-nl8l7
Feb 14 14:51:21.214: INFO: Got endpoints: latency-svc-nl8l7 [1.657116734s]
Feb 14 14:51:21.218: INFO: Created: latency-svc-jz8vn
Feb 14 14:51:21.385: INFO: Got endpoints: latency-svc-jz8vn [1.707803138s]
Feb 14 14:51:21.387: INFO: Created: latency-svc-zq79z
Feb 14 14:51:21.393: INFO: Got endpoints: latency-svc-zq79z [1.643597682s]
Feb 14 14:51:21.445: INFO: Created: latency-svc-48tkc
Feb 14 14:51:21.447: INFO: Got endpoints: latency-svc-48tkc [1.629108825s]
Feb 14 14:51:21.557: INFO: Created: latency-svc-k6dcp
Feb 14 14:51:21.564: INFO: Got endpoints: latency-svc-k6dcp [1.719594604s]
Feb 14 14:51:21.628: INFO: Created: latency-svc-hqbrb
Feb 14 14:51:21.636: INFO: Got endpoints: latency-svc-hqbrb [1.734944548s]
Feb 14 14:51:21.710: INFO: Created: latency-svc-rdb7x
Feb 14 14:51:21.725: INFO: Got endpoints: latency-svc-rdb7x [1.696347074s]
Feb 14 14:51:21.769: INFO: Created: latency-svc-9hdbn
Feb 14 14:51:21.778: INFO: Got endpoints: latency-svc-9hdbn [1.701776653s]
Feb 14 14:51:21.847: INFO: Created: latency-svc-fkpgg
Feb 14 14:51:21.858: INFO: Got endpoints: latency-svc-fkpgg [1.638758987s]
Feb 14 14:51:21.921: INFO: Created: latency-svc-w5xxd
Feb 14 14:51:21.925: INFO: Got endpoints: latency-svc-w5xxd [1.624753223s]
Feb 14 14:51:22.032: INFO: Created: latency-svc-2gcvw
Feb 14 14:51:22.064: INFO: Got endpoints: latency-svc-2gcvw [1.353675731s]
Feb 14 14:51:22.073: INFO: Created: latency-svc-szkmm
Feb 14 14:51:22.102: INFO: Got endpoints: latency-svc-szkmm [1.267069283s]
Feb 14 14:51:22.112: INFO: Created: latency-svc-t2kgx
Feb 14 14:51:22.179: INFO: Got endpoints: latency-svc-t2kgx [1.302148181s]
Feb 14 14:51:22.248: INFO: Created: latency-svc-jbzjh
Feb 14 14:51:22.272: INFO: Created: latency-svc-486j2
Feb 14 14:51:22.273: INFO: Got endpoints: latency-svc-jbzjh [1.269619501s]
Feb 14 14:51:22.405: INFO: Got endpoints: latency-svc-486j2 [1.223358682s]
Feb 14 14:51:22.408: INFO: Created: latency-svc-x6wl9
Feb 14 14:51:22.438: INFO: Got endpoints: latency-svc-x6wl9 [1.223713346s]
Feb 14 14:51:22.466: INFO: Created: latency-svc-78fcv
Feb 14 14:51:22.482: INFO: Got endpoints: latency-svc-78fcv [1.095652405s]
Feb 14 14:51:22.571: INFO: Created: latency-svc-zb4lq
Feb 14 14:51:22.577: INFO: Got endpoints: latency-svc-zb4lq [1.1839805s]
Feb 14 14:51:22.612: INFO: Created: latency-svc-dc9jl
Feb 14 14:51:22.631: INFO: Got endpoints: latency-svc-dc9jl [1.18359026s]
Feb 14 14:51:22.731: INFO: Created: latency-svc-x4nt9
Feb 14 14:51:22.735: INFO: Got endpoints: latency-svc-x4nt9 [1.17104254s]
Feb 14 14:51:22.793: INFO: Created: latency-svc-qf9ks
Feb 14 14:51:22.799: INFO: Got endpoints: latency-svc-qf9ks [1.162466072s]
Feb 14 14:51:22.912: INFO: Created: latency-svc-wtvt4
Feb 14 14:51:22.953: INFO: Got endpoints: latency-svc-wtvt4 [1.227623776s]
Feb 14 14:51:22.965: INFO: Created: latency-svc-6tfhl
Feb 14 14:51:22.976: INFO: Got endpoints: latency-svc-6tfhl [1.197756808s]
Feb 14 14:51:23.060: INFO: Created: latency-svc-zcrgm
Feb 14 14:51:23.090: INFO: Got endpoints: latency-svc-zcrgm [1.231322329s]
Feb 14 14:51:23.095: INFO: Created: latency-svc-wkk98
Feb 14 14:51:23.099: INFO: Got endpoints: latency-svc-wkk98 [1.173162648s]
Feb 14 14:51:23.136: INFO: Created: latency-svc-jt2k5
Feb 14 14:51:23.196: INFO: Got endpoints: latency-svc-jt2k5 [1.131031304s]
Feb 14 14:51:23.256: INFO: Created: latency-svc-dvb7n
Feb 14 14:51:23.263: INFO: Got endpoints: latency-svc-dvb7n [1.160038363s]
Feb 14 14:51:23.405: INFO: Created: latency-svc-t8hlk
Feb 14 14:51:23.417: INFO: Got endpoints: latency-svc-t8hlk [1.238324969s]
Feb 14 14:51:23.459: INFO: Created: latency-svc-hld69
Feb 14 14:51:23.468: INFO: Got endpoints: latency-svc-hld69 [1.195032653s]
Feb 14 14:51:23.562: INFO: Created: latency-svc-5xz68
Feb 14 14:51:23.568: INFO: Got endpoints: latency-svc-5xz68 [1.162227768s]
Feb 14 14:51:23.616: INFO: Created: latency-svc-4whq2
Feb 14 14:51:23.626: INFO: Got endpoints: latency-svc-4whq2 [1.187620713s]
Feb 14 14:51:23.664: INFO: Created: latency-svc-v2rs7
Feb 14 14:51:23.735: INFO: Got endpoints: latency-svc-v2rs7 [1.252561659s]
Feb 14 14:51:23.756: INFO: Created: latency-svc-nlq8m
Feb 14 14:51:23.806: INFO: Got endpoints: latency-svc-nlq8m [1.228568481s]
Feb 14 14:51:23.806: INFO: Created: latency-svc-lhjbv
Feb 14 14:51:23.898: INFO: Got endpoints: latency-svc-lhjbv [1.266850853s]
Feb 14 14:51:23.935: INFO: Created: latency-svc-fzvxn
Feb 14 14:51:23.979: INFO: Got endpoints: latency-svc-fzvxn [1.24282079s]
Feb 14 14:51:23.988: INFO: Created: latency-svc-nzqzx
Feb 14 14:51:24.066: INFO: Got endpoints: latency-svc-nzqzx [1.266703674s]
Feb 14 14:51:24.102: INFO: Created: latency-svc-n2rpg
Feb 14 14:51:24.149: INFO: Got endpoints: latency-svc-n2rpg [1.195128511s]
Feb 14 14:51:24.155: INFO: Created: latency-svc-jg57g
Feb 14 14:51:24.225: INFO: Got endpoints: latency-svc-jg57g [1.248812484s]
Feb 14 14:51:24.297: INFO: Created: latency-svc-nnbfk
Feb 14 14:51:24.313: INFO: Got endpoints: latency-svc-nnbfk [1.222673726s]
Feb 14 14:51:24.444: INFO: Created: latency-svc-488lt
Feb 14 14:51:24.449: INFO: Got endpoints: latency-svc-488lt [1.350560805s]
Feb 14 14:51:24.513: INFO: Created: latency-svc-g2rjl
Feb 14 14:51:24.526: INFO: Got endpoints: latency-svc-g2rjl [1.328783763s]
Feb 14 14:51:24.603: INFO: Created: latency-svc-9h7hd
Feb 14 14:51:24.637: INFO: Got endpoints: latency-svc-9h7hd [1.374008667s]
Feb 14 14:51:24.639: INFO: Created: latency-svc-t5x8z
Feb 14 14:51:24.645: INFO: Got endpoints: latency-svc-t5x8z [1.227190648s]
Feb 14 14:51:24.689: INFO: Created: latency-svc-vwbb8
Feb 14 14:51:24.775: INFO: Got endpoints: latency-svc-vwbb8 [1.306063804s]
Feb 14 14:51:24.804: INFO: Created: latency-svc-872f7
Feb 14 14:51:24.810: INFO: Got endpoints: latency-svc-872f7 [1.242364647s]
Feb 14 14:51:24.849: INFO: Created: latency-svc-jw6bg
Feb 14 14:51:24.870: INFO: Got endpoints: latency-svc-jw6bg [1.243713593s]
Feb 14 14:51:24.978: INFO: Created: latency-svc-czs5l
Feb 14 14:51:24.982: INFO: Got endpoints: latency-svc-czs5l [1.247053057s]
Feb 14 14:51:25.036: INFO: Created: latency-svc-jlbx2
Feb 14 14:51:25.046: INFO: Got endpoints: latency-svc-jlbx2 [1.238796652s]
Feb 14 14:51:25.123: INFO: Created: latency-svc-glp4v
Feb 14 14:51:25.164: INFO: Got endpoints: latency-svc-glp4v [1.265138412s]
Feb 14 14:51:25.169: INFO: Created: latency-svc-mt99k
Feb 14 14:51:25.171: INFO: Got endpoints: latency-svc-mt99k [1.191955474s]
Feb 14 14:51:25.214: INFO: Created: latency-svc-k978p
Feb 14 14:51:25.262: INFO: Got endpoints: latency-svc-k978p [1.195746168s]
Feb 14 14:51:25.306: INFO: Created: latency-svc-b6zzw
Feb 14 14:51:25.312: INFO: Got endpoints: latency-svc-b6zzw [1.161421212s]
Feb 14 14:51:25.443: INFO: Created: latency-svc-6cxt2
Feb 14 14:51:25.490: INFO: Got endpoints: latency-svc-6cxt2 [1.264389336s]
Feb 14 14:51:25.491: INFO: Created: latency-svc-rs4xf
Feb 14 14:51:25.523: INFO: Got endpoints: latency-svc-rs4xf [1.209290536s]
Feb 14 14:51:25.605: INFO: Created: latency-svc-4z76t
Feb 14 14:51:25.607: INFO: Got endpoints: latency-svc-4z76t [1.15711541s]
Feb 14 14:51:25.647: INFO: Created: latency-svc-lljs9
Feb 14 14:51:25.649: INFO: Got endpoints: latency-svc-lljs9 [1.123257596s]
Feb 14 14:51:25.701: INFO: Created: latency-svc-dsjln
Feb 14 14:51:25.773: INFO: Got endpoints: latency-svc-dsjln [1.135608211s]
Feb 14 14:51:25.811: INFO: Created: latency-svc-qmgjh
Feb 14 14:51:25.817: INFO: Got endpoints: latency-svc-qmgjh [1.170982448s]
Feb 14 14:51:25.850: INFO: Created: latency-svc-lfvhf
Feb 14 14:51:25.857: INFO: Got endpoints: latency-svc-lfvhf [1.081593242s]
Feb 14 14:51:25.940: INFO: Created: latency-svc-4ljdb
Feb 14 14:51:26.001: INFO: Created: latency-svc-f8w4f
Feb 14 14:51:26.001: INFO: Got endpoints: latency-svc-4ljdb [1.19049696s]
Feb 14 14:51:26.138: INFO: Got endpoints: latency-svc-f8w4f [1.267445692s]
Feb 14 14:51:26.175: INFO: Created: latency-svc-mxzm2
Feb 14 14:51:26.183: INFO: Got endpoints: latency-svc-mxzm2 [1.200319668s]
Feb 14 14:51:26.222: INFO: Created: latency-svc-vckcm
Feb 14 14:51:26.225: INFO: Got endpoints: latency-svc-vckcm [1.179412726s]
Feb 14 14:51:26.372: INFO: Created: latency-svc-vxf6p
Feb 14 14:51:26.390: INFO: Got endpoints: latency-svc-vxf6p [1.225949475s]
Feb 14 14:51:26.456: INFO: Created: latency-svc-t4h7k
Feb 14 14:51:26.461: INFO: Got endpoints: latency-svc-t4h7k [1.289372155s]
Feb 14 14:51:26.536: INFO: Created: latency-svc-28skk
Feb 14 14:51:26.637: INFO: Got endpoints: latency-svc-28skk [1.374013101s]
Feb 14 14:51:26.642: INFO: Created: latency-svc-gm2ht
Feb 14 14:51:26.658: INFO: Got endpoints: latency-svc-gm2ht [1.34638066s]
Feb 14 14:51:26.719: INFO: Created: latency-svc-7m29z
Feb 14 14:51:26.776: INFO: Got endpoints: latency-svc-7m29z [1.286097397s]
Feb 14 14:51:26.802: INFO: Created: latency-svc-9kt2w
Feb 14 14:51:26.802: INFO: Got endpoints: latency-svc-9kt2w [1.279147454s]
Feb 14 14:51:26.846: INFO: Created: latency-svc-xl6tv
Feb 14 14:51:26.852: INFO: Got endpoints: latency-svc-xl6tv [1.244832697s]
Feb 14 14:51:26.852: INFO: Latencies: [118.144331ms 157.562565ms 188.775535ms 278.754636ms 472.205961ms 587.984033ms 643.082111ms 820.844842ms 904.010129ms 960.969228ms 1.081593242s 1.084797131s 1.095652405s 1.123257596s 1.131031304s 1.134411536s 1.135608211s 1.15711541s 1.160038363s 1.161421212s 1.162227768s 1.162466072s 1.170982448s 1.17104254s 1.173162648s 1.179412726s 1.18359026s 1.1839805s 1.187620713s 1.19049696s 1.191955474s 1.195032653s 1.195128511s 1.195746168s 1.197756808s 1.200319668s 1.209290536s 1.222673726s 1.223358682s 1.223713346s 1.225949475s 1.227190648s 1.227623776s 1.228568481s 1.231322329s 1.233729072s 1.238324969s 1.238796652s 1.242364647s 1.24282079s 1.243713593s 1.244832697s 1.247053057s 1.248812484s 1.252561659s 1.264389336s 1.265138412s 1.266703674s 1.266850853s 1.267069283s 1.267445692s 1.269619501s 1.279147454s 1.280132774s 1.286097397s 1.288980671s 1.289372155s 1.294889398s 1.302148181s 1.306063804s 1.310279149s 1.310728838s 1.314639491s 1.328783763s 1.331330024s 1.341118448s 1.34638066s 1.350560805s 1.351390182s 1.353675731s 1.35503739s 1.361497188s 1.363646435s 1.37350742s 1.374008667s 1.374013101s 1.381964552s 1.403663041s 1.407757941s 1.41158331s 1.412723509s 1.423197037s 1.42546717s 1.430251814s 1.431620063s 1.432083075s 1.433430465s 1.437200582s 1.444823088s 1.446958813s 1.448613926s 1.454245181s 1.458704535s 1.460751908s 1.461621528s 1.462884235s 1.463862922s 1.464195976s 1.464715932s 1.466754175s 1.474337123s 1.479194564s 1.480666416s 1.493733021s 1.50110212s 1.501909259s 1.525302542s 1.525347133s 1.528284154s 1.533463689s 1.540792825s 1.54922817s 1.564873495s 1.566210062s 1.567420377s 1.581776229s 1.624753223s 1.629108825s 1.638758987s 1.643597682s 1.649534344s 1.652194473s 1.655156653s 1.657116734s 1.666353093s 1.667314168s 1.673705944s 1.67894753s 1.685911681s 1.696347074s 1.701776653s 1.704405673s 1.707803138s 1.711385772s 1.713873526s 1.719594604s 1.734944548s 1.738508435s 1.789507336s 1.795207798s 1.804404052s 1.808925874s 1.810039132s 1.819074374s 1.831450319s 1.833729547s 1.835540832s 1.843545828s 1.848632157s 1.85097158s 1.852668177s 1.854563474s 1.860301253s 1.864380498s 1.873834902s 1.892365442s 1.91278031s 1.914260939s 1.925074433s 1.931231553s 1.939585356s 1.947163162s 1.950153497s 1.953083645s 1.960029246s 1.96897945s 1.971072787s 1.987953056s 1.989474773s 2.001987832s 2.005729006s 2.008442533s 2.012488035s 2.024966042s 2.093394427s 2.159943204s 2.248624733s 2.259792903s 2.270600785s 2.315399457s 2.319154333s 2.321758102s 2.323755649s 2.396192865s 2.399768066s 2.426099271s 2.454276333s 2.491193897s 2.53971862s 2.625870755s]
Feb 14 14:51:26.853: INFO: 50 %ile: 1.448613926s
Feb 14 14:51:26.854: INFO: 90 %ile: 2.005729006s
Feb 14 14:51:26.854: INFO: 99 %ile: 2.53971862s
Feb 14 14:51:26.854: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:51:26.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8036" for this suite.
Feb 14 14:52:00.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:52:01.107: INFO: namespace svc-latency-8036 deletion completed in 34.163384499s

• [SLOW TEST:63.285 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:52:01.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 14:52:01.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85" in namespace "projected-2852" to be "success or failure"
Feb 14 14:52:01.231: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Pending", Reason="", readiness=false. Elapsed: 17.496241ms
Feb 14 14:52:03.240: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026164887s
Feb 14 14:52:05.253: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039272992s
Feb 14 14:52:07.262: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049042445s
Feb 14 14:52:09.275: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061900487s
Feb 14 14:52:11.282: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068499262s
STEP: Saw pod success
Feb 14 14:52:11.282: INFO: Pod "downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85" satisfied condition "success or failure"
Feb 14 14:52:11.285: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85 container client-container: 
STEP: delete the pod
Feb 14 14:52:11.328: INFO: Waiting for pod downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85 to disappear
Feb 14 14:52:11.347: INFO: Pod downwardapi-volume-3c3339a4-624a-4d17-88e4-c851ff07ea85 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:52:11.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2852" for this suite.
Feb 14 14:52:17.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:52:17.584: INFO: namespace projected-2852 deletion completed in 6.230036173s

• [SLOW TEST:16.476 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:52:17.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:52:17.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 14 14:52:17.874: INFO: stderr: ""
Feb 14 14:52:17.874: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:52:17.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7065" for this suite.
Feb 14 14:52:23.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:52:24.034: INFO: namespace kubectl-7065 deletion completed in 6.145828629s

• [SLOW TEST:6.450 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:52:24.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 14 14:52:34.742: INFO: Successfully updated pod "labelsupdatee21e9377-ab83-4226-a944-4b423c4f7d6e"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:52:36.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3627" for this suite.
Feb 14 14:52:58.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:52:59.000: INFO: namespace projected-3627 deletion completed in 22.183656517s

• [SLOW TEST:34.966 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:52:59.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 14 14:52:59.131: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1535" to be "success or failure"
Feb 14 14:52:59.140: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058924ms
Feb 14 14:53:01.147: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015554662s
Feb 14 14:53:03.159: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02744302s
Feb 14 14:53:05.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034120263s
Feb 14 14:53:07.183: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050997412s
Feb 14 14:53:09.231: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098948297s
Feb 14 14:53:11.238: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105849006s
Feb 14 14:53:13.252: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.120120892s
STEP: Saw pod success
Feb 14 14:53:13.252: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 14 14:53:13.266: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 14 14:53:13.413: INFO: Waiting for pod pod-host-path-test to disappear
Feb 14 14:53:13.423: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:53:13.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1535" for this suite.
Feb 14 14:53:19.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:53:19.599: INFO: namespace hostpath-1535 deletion completed in 6.162112845s

• [SLOW TEST:20.599 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:53:19.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-fdfc860b-0d0d-4835-9173-46c7a84b9127 in namespace container-probe-9610
Feb 14 14:53:27.817: INFO: Started pod busybox-fdfc860b-0d0d-4835-9173-46c7a84b9127 in namespace container-probe-9610
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 14:53:27.824: INFO: Initial restart count of pod busybox-fdfc860b-0d0d-4835-9173-46c7a84b9127 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:57:27.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9610" for this suite.
Feb 14 14:57:33.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:57:34.105: INFO: namespace container-probe-9610 deletion completed in 6.133544632s

• [SLOW TEST:254.506 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:57:34.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 14:57:34.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3955'
Feb 14 14:57:36.225: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 14:57:36.225: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 14 14:57:40.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3955'
Feb 14 14:57:40.571: INFO: stderr: ""
Feb 14 14:57:40.571: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:57:40.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3955" for this suite.
Feb 14 14:58:04.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:58:04.811: INFO: namespace kubectl-3955 deletion completed in 24.202457876s

• [SLOW TEST:30.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:58:04.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-547002b5-d67e-4e33-9587-313bcedde094
STEP: Creating a pod to test consume secrets
Feb 14 14:58:05.005: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85" in namespace "projected-7803" to be "success or failure"
Feb 14 14:58:05.187: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Pending", Reason="", readiness=false. Elapsed: 182.165682ms
Feb 14 14:58:07.201: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195917924s
Feb 14 14:58:09.213: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207482868s
Feb 14 14:58:11.220: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215199556s
Feb 14 14:58:13.229: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224108998s
Feb 14 14:58:15.250: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.24513227s
STEP: Saw pod success
Feb 14 14:58:15.251: INFO: Pod "pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85" satisfied condition "success or failure"
Feb 14 14:58:15.256: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 14:58:15.416: INFO: Waiting for pod pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85 to disappear
Feb 14 14:58:15.424: INFO: Pod pod-projected-secrets-c781ab71-6871-495d-8e74-0dcdb2f61a85 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:58:15.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7803" for this suite.
Feb 14 14:58:21.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:58:21.584: INFO: namespace projected-7803 deletion completed in 6.155671893s

• [SLOW TEST:16.773 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:58:21.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 14:58:21.859: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1aa2e317-1297-4c3a-aab6-43bc671cebbe", Controller:(*bool)(0xc002d4610a), BlockOwnerDeletion:(*bool)(0xc002d4610b)}}
Feb 14 14:58:21.893: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c9c3e4c5-4562-4e23-b330-e3ef8b1de0d8", Controller:(*bool)(0xc00243aa2a), BlockOwnerDeletion:(*bool)(0xc00243aa2b)}}
Feb 14 14:58:21.909: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"94b2afb0-6da8-4765-99be-35b9fade681e", Controller:(*bool)(0xc00243b00a), BlockOwnerDeletion:(*bool)(0xc00243b00b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 14:58:26.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4016" for this suite.
Feb 14 14:58:33.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 14:58:33.405: INFO: namespace gc-4016 deletion completed in 6.449838064s

• [SLOW TEST:11.820 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 14:58:33.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-4a92bb8b-a95c-41d5-bbc4-041f9d44d299
STEP: Creating secret with name s-test-opt-upd-179422ec-a4a9-4e07-8e75-4dcddb390f2a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4a92bb8b-a95c-41d5-bbc4-041f9d44d299
STEP: Updating secret s-test-opt-upd-179422ec-a4a9-4e07-8e75-4dcddb390f2a
STEP: Creating secret with name s-test-opt-create-a5071fba-dc1d-4792-958b-c624eade8558
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:00:03.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9416" for this suite.
Feb 14 15:00:27.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:00:28.011: INFO: namespace projected-9416 deletion completed in 24.190285297s

• [SLOW TEST:114.605 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:00:28.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0214 15:00:42.719752       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 15:00:42.719: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:00:42.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8373" for this suite.
Feb 14 15:00:54.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:00:55.469: INFO: namespace gc-8373 deletion completed in 12.675322298s

• [SLOW TEST:27.458 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:00:55.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 15:00:55.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8" in namespace "downward-api-1773" to be "success or failure"
Feb 14 15:00:55.557: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.415202ms
Feb 14 15:00:57.567: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015738327s
Feb 14 15:00:59.579: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027051369s
Feb 14 15:01:01.589: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037553931s
Feb 14 15:01:03.604: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052591759s
Feb 14 15:01:05.612: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060703157s
Feb 14 15:01:07.623: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.071287553s
Feb 14 15:01:09.633: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.081733693s
STEP: Saw pod success
Feb 14 15:01:09.633: INFO: Pod "downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8" satisfied condition "success or failure"
Feb 14 15:01:09.638: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8 container client-container: 
STEP: delete the pod
Feb 14 15:01:09.799: INFO: Waiting for pod downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8 to disappear
Feb 14 15:01:09.804: INFO: Pod downwardapi-volume-f2140b54-17b4-487d-929a-c609aa420af8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:01:09.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1773" for this suite.
Feb 14 15:01:15.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:01:16.059: INFO: namespace downward-api-1773 deletion completed in 6.247279359s

• [SLOW TEST:20.590 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:01:16.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-768961f7-32bc-42bd-8608-5ce85e12752f
STEP: Creating a pod to test consume secrets
Feb 14 15:01:16.178: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d" in namespace "projected-8955" to be "success or failure"
Feb 14 15:01:16.205: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.544323ms
Feb 14 15:01:18.211: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032650041s
Feb 14 15:01:20.220: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041947404s
Feb 14 15:01:22.227: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049146629s
Feb 14 15:01:24.235: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057120067s
Feb 14 15:01:26.552: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.373648504s
Feb 14 15:01:28.566: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.387597275s
STEP: Saw pod success
Feb 14 15:01:28.566: INFO: Pod "pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d" satisfied condition "success or failure"
Feb 14 15:01:28.572: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 15:01:28.781: INFO: Waiting for pod pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d to disappear
Feb 14 15:01:28.793: INFO: Pod pod-projected-secrets-9ddfe597-4c10-4226-958d-59b360f0444d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:01:28.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8955" for this suite.
Feb 14 15:01:34.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:01:34.969: INFO: namespace projected-8955 deletion completed in 6.167331675s

• [SLOW TEST:18.910 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:01:34.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 15:01:43.199: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:01:43.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6805" for this suite.
Feb 14 15:01:49.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:01:49.371: INFO: namespace container-runtime-6805 deletion completed in 6.133871765s

• [SLOW TEST:14.400 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:01:49.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:01:55.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4517" for this suite.
Feb 14 15:02:01.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:02:01.935: INFO: namespace watch-4517 deletion completed in 6.253106733s

• [SLOW TEST:12.564 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:02:01.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ecb29887-a852-483c-945b-d8e896e28de8
STEP: Creating a pod to test consume configMaps
Feb 14 15:02:02.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3" in namespace "configmap-8576" to be "success or failure"
Feb 14 15:02:02.300: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3": Phase="Pending", Reason="", readiness=false. Elapsed: 160.679802ms
Feb 14 15:02:04.366: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225832235s
Feb 14 15:02:06.394: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.253774241s
Feb 14 15:02:08.407: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267177786s
Feb 14 15:02:10.441: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.300936852s
STEP: Saw pod success
Feb 14 15:02:10.441: INFO: Pod "pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3" satisfied condition "success or failure"
Feb 14 15:02:10.446: INFO: Trying to get logs from node iruya-node pod pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3 container configmap-volume-test: 
STEP: delete the pod
Feb 14 15:02:10.623: INFO: Waiting for pod pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3 to disappear
Feb 14 15:02:10.633: INFO: Pod pod-configmaps-dcd0b588-8d5d-429e-8553-dcccb4ccacc3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:02:10.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8576" for this suite.
Feb 14 15:02:16.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:02:16.829: INFO: namespace configmap-8576 deletion completed in 6.188784173s

• [SLOW TEST:14.893 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:02:16.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-809a4043-9321-470f-aea8-ac98f8742dc4
STEP: Creating a pod to test consume secrets
Feb 14 15:02:16.938: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0" in namespace "projected-9158" to be "success or failure"
Feb 14 15:02:16.944: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.911997ms
Feb 14 15:02:18.951: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012289765s
Feb 14 15:02:20.970: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031010793s
Feb 14 15:02:23.015: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076066832s
Feb 14 15:02:25.024: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085206704s
STEP: Saw pod success
Feb 14 15:02:25.024: INFO: Pod "pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0" satisfied condition "success or failure"
Feb 14 15:02:25.027: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 15:02:25.165: INFO: Waiting for pod pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0 to disappear
Feb 14 15:02:25.176: INFO: Pod pod-projected-secrets-7e37fc78-11c9-46ce-b1dc-3b32f4a35ab0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:02:25.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9158" for this suite.
Feb 14 15:02:31.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:02:31.345: INFO: namespace projected-9158 deletion completed in 6.161745091s

• [SLOW TEST:14.516 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:02:31.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 14 15:02:41.999: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4891 pod-service-account-d5ab8d4c-50bb-4ea7-87bd-af581dd59747 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 14 15:02:43.016: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4891 pod-service-account-d5ab8d4c-50bb-4ea7-87bd-af581dd59747 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 14 15:02:43.378: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4891 pod-service-account-d5ab8d4c-50bb-4ea7-87bd-af581dd59747 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:02:43.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4891" for this suite.
Feb 14 15:02:49.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:02:50.081: INFO: namespace svcaccounts-4891 deletion completed in 6.141383762s

• [SLOW TEST:18.735 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:02:50.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 14 15:02:50.127: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 14 15:02:50.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:50.591: INFO: stderr: ""
Feb 14 15:02:50.591: INFO: stdout: "service/redis-slave created\n"
Feb 14 15:02:50.592: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 14 15:02:50.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:51.036: INFO: stderr: ""
Feb 14 15:02:51.036: INFO: stdout: "service/redis-master created\n"
Feb 14 15:02:51.037: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 14 15:02:51.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:51.729: INFO: stderr: ""
Feb 14 15:02:51.729: INFO: stdout: "service/frontend created\n"
Feb 14 15:02:51.730: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 14 15:02:51.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:52.239: INFO: stderr: ""
Feb 14 15:02:52.239: INFO: stdout: "deployment.apps/frontend created\n"
Feb 14 15:02:52.240: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 14 15:02:52.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:52.770: INFO: stderr: ""
Feb 14 15:02:52.771: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 14 15:02:52.771: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 14 15:02:52.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-496'
Feb 14 15:02:53.101: INFO: stderr: ""
Feb 14 15:02:53.101: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 14 15:02:53.101: INFO: Waiting for all frontend pods to be Running.
Feb 14 15:03:23.154: INFO: Waiting for frontend to serve content.
Feb 14 15:03:23.363: INFO: Trying to add a new entry to the guestbook.
Feb 14 15:03:23.394: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 14 15:03:23.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:23.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:23.722: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 15:03:23.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:23.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:23.957: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 15:03:23.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:24.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:24.258: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 15:03:24.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:24.366: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:24.366: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 15:03:24.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:24.505: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:24.505: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 15:03:24.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-496'
Feb 14 15:03:24.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 15:03:24.703: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:03:24.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-496" for this suite.
Feb 14 15:04:08.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:04:08.944: INFO: namespace kubectl-496 deletion completed in 44.229357771s

• [SLOW TEST:78.863 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
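For context, the `redis-slave` Service that this guestbook test creates and later force-deletes (`kubectl delete --grace-period=0 --force`) looks roughly like the following. This is a sketch reconstructed from the public Kubernetes guestbook sample, not from the log itself, so treat the exact labels and metadata as assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379          # redis port, matching the containerPort in the Deployment above
  selector:
    app: redis
    role: slave
    tier: backend
```

Note the stderr warning in the log: `--grace-period=0 --force` returns as soon as the API object is removed, without waiting for the kubelet to confirm the containers have actually stopped.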
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:04:08.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-84a94d49-a677-4b0a-b7ae-b40ce98e6d3b
STEP: Creating a pod to test consume configMaps
Feb 14 15:04:09.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c" in namespace "configmap-6209" to be "success or failure"
Feb 14 15:04:09.079: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.018095ms
Feb 14 15:04:11.116: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049975899s
Feb 14 15:04:13.122: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056277749s
Feb 14 15:04:15.140: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073924516s
Feb 14 15:04:17.150: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083907943s
Feb 14 15:04:19.174: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108173643s
STEP: Saw pod success
Feb 14 15:04:19.174: INFO: Pod "pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c" satisfied condition "success or failure"
Feb 14 15:04:19.177: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c container configmap-volume-test: 
STEP: delete the pod
Feb 14 15:04:19.230: INFO: Waiting for pod pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c to disappear
Feb 14 15:04:19.242: INFO: Pod pod-configmaps-5b88afbf-5041-4516-a2c7-727a90ffb01c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:04:19.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6209" for this suite.
Feb 14 15:04:25.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:04:25.499: INFO: namespace configmap-6209 deletion completed in 6.250575592s

• [SLOW TEST:16.554 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
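The "consumable from pods in volume as non-root" case above boils down to mounting a ConfigMap as a volume in a pod whose security context requests a non-root UID. A minimal sketch of such a pod follows; the names and image are illustrative (the suite generates its own names and uses its own test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                     # non-root UID; the point of the test
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                      # illustrative only
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example   # hypothetical; the log shows a generated name
```

The pod runs to completion (`Phase="Succeeded"` in the log) and the test then reads the container's logs to verify the file contents.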
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:04:25.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6aa1194f-4ad9-4f5a-a09b-8ef427081abc
STEP: Creating a pod to test consume secrets
Feb 14 15:04:25.601: INFO: Waiting up to 5m0s for pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004" in namespace "secrets-5548" to be "success or failure"
Feb 14 15:04:25.611: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004": Phase="Pending", Reason="", readiness=false. Elapsed: 9.625071ms
Feb 14 15:04:27.621: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01986293s
Feb 14 15:04:29.714: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112323447s
Feb 14 15:04:31.738: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137063701s
Feb 14 15:04:33.753: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151790388s
STEP: Saw pod success
Feb 14 15:04:33.753: INFO: Pod "pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004" satisfied condition "success or failure"
Feb 14 15:04:33.766: INFO: Trying to get logs from node iruya-node pod pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004 container secret-volume-test: 
STEP: delete the pod
Feb 14 15:04:33.874: INFO: Waiting for pod pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004 to disappear
Feb 14 15:04:33.887: INFO: Pod pod-secrets-83e7c564-5b2a-48e5-baeb-f238b9c00004 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:04:33.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5548" for this suite.
Feb 14 15:04:39.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:04:40.103: INFO: namespace secrets-5548 deletion completed in 6.162408014s

• [SLOW TEST:14.603 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
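"Mappings and Item Mode set" in the Secrets test above refers to the `items` list of a secret volume, which remaps a secret key to a custom path and assigns it a per-item file mode. A minimal sketch, with hypothetical names and data keys:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                      # illustrative only
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example   # hypothetical; the log shows a generated name
      items:
      - key: data-1                     # secret key being mapped
        path: new-path-data-1           # custom file path inside the mount
        mode: 0400                      # per-item mode; overrides defaultMode for this file
```

The per-item `mode` is what distinguishes this case from the plain `defaultMode` variant exercised elsewhere in the suite.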
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:04:40.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:04:50.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8181" for this suite.
Feb 14 15:05:32.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:05:32.494: INFO: namespace kubelet-test-8181 deletion completed in 42.205123855s

• [SLOW TEST:52.391 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
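The Kubelet "should print the output to logs" case above has no visible steps in the log, but the pattern it exercises is simple: run a busybox command in a pod and confirm its stdout is captured by the container runtime. A minimal sketch, with an assumed pod name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]
```

Once the container has run, `kubectl logs busybox-scheduling-example` should return the echoed line, which is what the test asserts.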
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:05:32.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:05:32.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1447" for this suite.
Feb 14 15:05:38.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:05:38.780: INFO: namespace services-1447 deletion completed in 6.166100624s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.285 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:05:38.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 15:05:38.994: INFO: Waiting up to 5m0s for pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df" in namespace "emptydir-5459" to be "success or failure"
Feb 14 15:05:39.129: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Pending", Reason="", readiness=false. Elapsed: 134.503739ms
Feb 14 15:05:41.142: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147982553s
Feb 14 15:05:43.194: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199855752s
Feb 14 15:05:45.201: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206720688s
Feb 14 15:05:47.211: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Running", Reason="", readiness=true. Elapsed: 8.217058529s
Feb 14 15:05:49.219: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.22513289s
STEP: Saw pod success
Feb 14 15:05:49.219: INFO: Pod "pod-c387d9b3-b14b-4f7f-90c4-876e97b878df" satisfied condition "success or failure"
Feb 14 15:05:49.224: INFO: Trying to get logs from node iruya-node pod pod-c387d9b3-b14b-4f7f-90c4-876e97b878df container test-container: 
STEP: delete the pod
Feb 14 15:05:49.401: INFO: Waiting for pod pod-c387d9b3-b14b-4f7f-90c4-876e97b878df to disappear
Feb 14 15:05:49.427: INFO: Pod pod-c387d9b3-b14b-4f7f-90c4-876e97b878df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:05:49.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5459" for this suite.
Feb 14 15:05:55.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:05:55.615: INFO: namespace emptydir-5459 deletion completed in 6.178265238s

• [SLOW TEST:16.835 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
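The EmptyDir "(root,0777,default)" case above writes a file with mode 0777 as root into an `emptyDir` volume on the default medium (node disk, as opposed to `medium: Memory`). A rough sketch of the shape of the test pod; the actual suite uses its own mounttest image rather than a shell one-liner:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # illustrative; the suite uses a dedicated test image
    command: ["sh", "-c", "umask 0; touch /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                        # empty spec = "default" medium (node storage)
```

The test then checks the reported permissions and ownership in the container's output, which is why the log polls the pod until `Phase="Succeeded"` before fetching logs.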
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:05:55.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 15:05:55.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4968'
Feb 14 15:05:55.938: INFO: stderr: ""
Feb 14 15:05:55.939: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 14 15:05:55.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4968'
Feb 14 15:06:00.085: INFO: stderr: ""
Feb 14 15:06:00.085: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:06:00.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4968" for this suite.
Feb 14 15:06:06.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:06:06.279: INFO: namespace kubectl-4968 deletion completed in 6.185398753s

• [SLOW TEST:10.661 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
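The `kubectl run ... --restart=Never --generator=run-pod/v1` invocation in the log above creates a bare Pod rather than a Deployment. Its YAML equivalent is roughly the following (the container name mirrors the pod name, which is what `kubectl run` generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod
spec:
  restartPolicy: Never                  # set by --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

With `restartPolicy: Never` there is no controller managing the pod, so the test's cleanup is a plain `kubectl delete pods e2e-test-nginx-pod`, as shown in the log.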
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:06:06.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 15:06:24.627: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:24.713: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:26.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:26.729: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:28.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:28.723: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:30.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:30.721: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:32.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:32.734: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:34.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:34.725: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 15:06:36.714: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 15:06:36.723: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:06:36.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2380" for this suite.
Feb 14 15:06:58.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:06:58.887: INFO: namespace container-lifecycle-hook-2380 deletion completed in 22.156172612s

• [SLOW TEST:52.608 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
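The poststart HTTP hook test above first creates a handler pod to receive the hook request, then a pod whose container declares a `postStart` lifecycle hook of type `httpGet` pointed at that handler. A sketch of the hooked pod; the image, path, and host IP are assumptions (the suite targets the handler pod's cluster IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox                      # illustrative only
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart     # hypothetical handler endpoint
          port: 8080
          host: 10.32.0.5               # hypothetical IP of the handler pod
```

The kubelet issues the GET after the container starts; the test then verifies on the handler side that the request arrived ("check poststart hook") before deleting the pod and polling for its disappearance, as the log shows.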
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:06:58.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7296, will wait for the garbage collector to delete the pods
Feb 14 15:07:11.022: INFO: Deleting Job.batch foo took: 10.580864ms
Feb 14 15:07:11.323: INFO: Terminating Job.batch foo pods took: 300.608752ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:07:56.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7296" for this suite.
Feb 14 15:08:02.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:08:02.886: INFO: namespace job-7296 deletion completed in 6.150799507s

• [SLOW TEST:63.998 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
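The Job deletion test above creates a parallel Job, waits until the number of active pods equals `parallelism`, then deletes the Job and lets the garbage collector remove its pods (hence the "will wait for the garbage collector to delete the pods" line and the long tail before the namespace teardown). A minimal sketch of such a Job, with illustrative image and command:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                        # test asserts active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox                  # illustrative only
        command: ["sleep", "1000000"]   # long-running so pods stay active
```

Deleting the Job object is fast (about 10ms in the log); the bulk of the 63s test time is the garbage collector terminating the dependent pods and the namespace deletion that follows.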
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:08:02.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 15:08:02.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2" in namespace "projected-6065" to be "success or failure"
Feb 14 15:08:02.999: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.369796ms
Feb 14 15:08:05.019: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030282451s
Feb 14 15:08:07.032: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043836094s
Feb 14 15:08:09.044: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055298377s
Feb 14 15:08:11.059: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070775402s
Feb 14 15:08:13.070: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082077474s
Feb 14 15:08:15.083: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.094204332s
STEP: Saw pod success
Feb 14 15:08:15.083: INFO: Pod "downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2" satisfied condition "success or failure"
Feb 14 15:08:15.086: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2 container client-container: 
STEP: delete the pod
Feb 14 15:08:15.142: INFO: Waiting for pod downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2 to disappear
Feb 14 15:08:15.151: INFO: Pod downwardapi-volume-79f4fe3e-8372-441e-874a-031913e95ff2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:08:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6065" for this suite.
Feb 14 15:08:21.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:08:21.320: INFO: namespace projected-6065 deletion completed in 6.146177456s

• [SLOW TEST:18.434 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
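The projected downward API test above exposes a container's memory request as a file inside a `projected` volume via a `resourceFieldRef`. A minimal sketch of the relevant pod spec (names are illustrative; `client-container` matches the container name visible in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # illustrative only
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi                    # the value projected into the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The container reads the projected file and prints the request in bytes; the test fetches the container's logs after `Phase="Succeeded"` to verify the value.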
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:08:21.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 15:08:21.399: INFO: Waiting up to 5m0s for pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3" in namespace "emptydir-1922" to be "success or failure"
Feb 14 15:08:21.506: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3": Phase="Pending", Reason="", readiness=false. Elapsed: 106.759829ms
Feb 14 15:08:23.520: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120511963s
Feb 14 15:08:25.531: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131720445s
Feb 14 15:08:27.541: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140912775s
Feb 14 15:08:29.554: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154193395s
STEP: Saw pod success
Feb 14 15:08:29.554: INFO: Pod "pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3" satisfied condition "success or failure"
Feb 14 15:08:29.559: INFO: Trying to get logs from node iruya-node pod pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3 container test-container: 
STEP: delete the pod
Feb 14 15:08:29.662: INFO: Waiting for pod pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3 to disappear
Feb 14 15:08:29.669: INFO: Pod pod-7b9027d3-8ecb-4b60-8fea-ae862eb460d3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:08:29.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1922" for this suite.
Feb 14 15:08:35.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:08:35.952: INFO: namespace emptydir-1922 deletion completed in 6.273468324s

• [SLOW TEST:14.631 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:08:35.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 14 15:08:36.028: INFO: Waiting up to 5m0s for pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e" in namespace "containers-4326" to be "success or failure"
Feb 14 15:08:36.032: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061019ms
Feb 14 15:08:38.044: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0152751s
Feb 14 15:08:40.051: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02236033s
Feb 14 15:08:42.060: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031429131s
Feb 14 15:08:44.071: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04287285s
Feb 14 15:08:46.210: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181496755s
Feb 14 15:08:48.218: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.189362297s
STEP: Saw pod success
Feb 14 15:08:48.218: INFO: Pod "client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e" satisfied condition "success or failure"
Feb 14 15:08:48.222: INFO: Trying to get logs from node iruya-node pod client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e container test-container: 
STEP: delete the pod
Feb 14 15:08:48.273: INFO: Waiting for pod client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e to disappear
Feb 14 15:08:48.277: INFO: Pod client-containers-b50148f6-475e-4895-b74a-d42a414a1c4e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:08:48.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4326" for this suite.
Feb 14 15:08:54.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:08:54.405: INFO: namespace containers-4326 deletion completed in 6.124235073s

• [SLOW TEST:18.453 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
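The Docker Containers test above verifies argument override: in a Kubernetes container spec, `args` replaces the image's CMD while the image's ENTRYPOINT is kept (and `command` would replace the ENTRYPOINT). A sketch of that distinction, with illustrative names and values:

```python
# Sketch (not the e2e framework's actual Go code) of the spec shape behind
# "override the image's default arguments (docker cmd)": only `args` is set,
# so the image ENTRYPOINT runs with these arguments instead of the image CMD.
container = {
    "name": "test-container",
    "image": "busybox",                         # placeholder image
    "args": ["echo", "override", "arguments"],  # replaces the image CMD
}
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-example"},  # illustrative name
    "spec": {"containers": [container], "restartPolicy": "Never"},
}

# `args` set, `command` absent: ENTRYPOINT preserved, CMD overridden.
assert "args" in container and "command" not in container
```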
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:08:54.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 14 15:08:54.519: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:09:11.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-364" for this suite.
Feb 14 15:09:17.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:09:17.418: INFO: namespace pods-364 deletion completed in 6.238935681s

• [SLOW TEST:23.013 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
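The Pods test above sets up a watch, submits a pod, deletes it gracefully, and checks that creation and deletion were both observed. A sketch of the ordering property it verifies, using Kubernetes watch event types; the concrete event sequence shown is an illustrative assumption:

```python
# The watch API delivers ADDED on creation, MODIFIED while the pod changes
# (e.g. deletionTimestamp set during graceful termination), and DELETED on
# final removal. The test's core property: creation is observed before deletion.
observed = ["ADDED", "MODIFIED", "MODIFIED", "DELETED"]  # assumed sequence

def creation_then_deletion(events):
    """True if the pod was seen created strictly before it was seen deleted."""
    return ("ADDED" in events and "DELETED" in events
            and events.index("ADDED") < events.index("DELETED"))

assert creation_then_deletion(observed)
```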
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:09:17.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 14 15:09:17.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551" in namespace "downward-api-7815" to be "success or failure"
Feb 14 15:09:17.519: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Pending", Reason="", readiness=false. Elapsed: 16.34044ms
Feb 14 15:09:19.530: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027691183s
Feb 14 15:09:21.545: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043065763s
Feb 14 15:09:23.553: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050490554s
Feb 14 15:09:25.562: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059620821s
Feb 14 15:09:27.570: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067979227s
STEP: Saw pod success
Feb 14 15:09:27.571: INFO: Pod "downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551" satisfied condition "success or failure"
Feb 14 15:09:27.575: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551 container client-container: 
STEP: delete the pod
Feb 14 15:09:27.688: INFO: Waiting for pod downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551 to disappear
Feb 14 15:09:27.717: INFO: Pod downwardapi-volume-308d851f-badf-4e8a-9f75-103c3bc43551 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:09:27.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7815" for this suite.
Feb 14 15:09:33.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:09:33.992: INFO: namespace downward-api-7815 deletion completed in 6.265262348s

• [SLOW TEST:16.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
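The Downward API volume test above projects the pod's own name into a file. A sketch of the volume source it relies on (`fieldRef` with `metadata.name`); the volume and file names are illustrative:

```python
# Sketch of a downwardAPI volume that exposes the pod name at
# /etc/podinfo/podname inside the container ("provide podname only").
volume = {
    "name": "podinfo",  # illustrative volume name
    "downwardAPI": {
        "items": [{
            "path": "podname",                          # file name in the mount
            "fieldRef": {"fieldPath": "metadata.name"}, # pod's own name
        }],
    },
}

# Exactly one projected item, sourced from the pod's metadata.name field.
items = volume["downwardAPI"]["items"]
assert len(items) == 1
assert items[0]["fieldRef"]["fieldPath"] == "metadata.name"
```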
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:09:33.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-l5gb
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 15:09:34.099: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l5gb" in namespace "subpath-6171" to be "success or failure"
Feb 14 15:09:34.104: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.000127ms
Feb 14 15:09:36.113: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013625572s
Feb 14 15:09:38.119: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019520756s
Feb 14 15:09:40.168: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068459965s
Feb 14 15:09:42.183: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083413536s
Feb 14 15:09:44.199: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.099245363s
Feb 14 15:09:46.207: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 12.107500421s
Feb 14 15:09:48.217: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 14.117317567s
Feb 14 15:09:50.232: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 16.131909957s
Feb 14 15:09:52.241: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 18.14170758s
Feb 14 15:09:54.250: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 20.150708684s
Feb 14 15:09:56.260: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 22.16034693s
Feb 14 15:09:58.271: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 24.170961065s
Feb 14 15:10:00.277: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 26.177387607s
Feb 14 15:10:02.284: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 28.184215649s
Feb 14 15:10:04.304: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Running", Reason="", readiness=true. Elapsed: 30.204536422s
Feb 14 15:10:06.323: INFO: Pod "pod-subpath-test-configmap-l5gb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.223443682s
STEP: Saw pod success
Feb 14 15:10:06.323: INFO: Pod "pod-subpath-test-configmap-l5gb" satisfied condition "success or failure"
Feb 14 15:10:06.328: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-l5gb container test-container-subpath-configmap-l5gb: 
STEP: delete the pod
Feb 14 15:10:06.388: INFO: Waiting for pod pod-subpath-test-configmap-l5gb to disappear
Feb 14 15:10:06.429: INFO: Pod pod-subpath-test-configmap-l5gb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-l5gb
Feb 14 15:10:06.429: INFO: Deleting pod "pod-subpath-test-configmap-l5gb" in namespace "subpath-6171"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:10:06.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6171" for this suite.
Feb 14 15:10:12.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:10:12.617: INFO: namespace subpath-6171 deletion completed in 6.175810952s

• [SLOW TEST:38.624 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
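The Subpath test above mounts a single ConfigMap key over an existing file rather than shadowing a whole directory. The mechanism is `volumeMounts[].subPath`; a sketch with illustrative names (the target path shown is an assumption):

```python
# With subPath, only the named file/key from the volume is mounted at
# mountPath, so an existing file can be replaced without hiding its directory.
mount = {
    "name": "configmap-volume",   # illustrative volume name
    "mountPath": "/etc/hosts",    # assumed pre-existing file in the container
    "subPath": "test-file",       # single key projected over that file
}
container = {
    "name": "test-container-subpath",
    "image": "busybox",           # placeholder image
    "volumeMounts": [mount],
}

# subPath is relative (no leading slash), unlike the absolute mountPath.
assert mount["mountPath"].startswith("/")
assert not mount["subPath"].startswith("/")
```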
SSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:10:12.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-ffc40244-c45b-4ff2-ab26-a186b8cad801
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:10:24.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8323" for this suite.
Feb 14 15:10:48.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:10:49.029: INFO: namespace configmap-8323 deletion completed in 24.107344734s

• [SLOW TEST:36.412 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
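The ConfigMap test above checks that binary payloads survive the round trip into a volume. ConfigMaps carry UTF-8 text under `data` and arbitrary bytes, base64-encoded, under `binaryData`; a sketch with illustrative keys and bytes:

```python
import base64

# A ConfigMap may hold both text (data) and base64-encoded bytes (binaryData);
# the kubelet writes the decoded bytes into the mounted volume.
raw = bytes([0xFF, 0x00, 0x7F])  # illustrative non-UTF-8 payload
cm = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-binary-example"},  # illustrative name
    "data": {"text-key": "hello"},
    "binaryData": {"binary-key": base64.b64encode(raw).decode("ascii")},
}

# Decoding the stored value recovers the original bytes exactly.
assert base64.b64decode(cm["binaryData"]["binary-key"]) == raw
```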
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:10:49.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c56f122b-35af-48d8-adf3-53aaf77ec762
STEP: Creating a pod to test consume secrets
Feb 14 15:10:49.132: INFO: Waiting up to 5m0s for pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b" in namespace "secrets-4683" to be "success or failure"
Feb 14 15:10:49.141: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697983ms
Feb 14 15:10:51.153: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020839478s
Feb 14 15:10:53.158: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025591935s
Feb 14 15:10:55.165: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032245083s
Feb 14 15:10:57.172: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039996911s
Feb 14 15:10:59.178: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045831592s
STEP: Saw pod success
Feb 14 15:10:59.178: INFO: Pod "pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b" satisfied condition "success or failure"
Feb 14 15:10:59.181: INFO: Trying to get logs from node iruya-node pod pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b container secret-volume-test: 
STEP: delete the pod
Feb 14 15:10:59.368: INFO: Waiting for pod pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b to disappear
Feb 14 15:10:59.380: INFO: Pod pod-secrets-e69b6cb3-52f7-4ccf-98a3-dc367efde16b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:10:59.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4683" for this suite.
Feb 14 15:11:05.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:11:05.636: INFO: namespace secrets-4683 deletion completed in 6.246128328s

• [SLOW TEST:16.607 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
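The Secrets test above sets `defaultMode` on a secret volume so every projected file gets that permission. A sketch of the volume source; the secret name and mode value are illustrative:

```python
# defaultMode applies to all files projected from the secret. It is an octal
# file mode; in a JSON manifest it is carried as the decimal integer
# (0o400 == 256), while YAML also accepts the 0400 octal notation.
volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-example",  # illustrative name
        "defaultMode": 0o400,                 # read-only for the owner
    },
}

assert volume["secret"]["defaultMode"] == 256  # same value, decimal form
```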
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:11:05.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-895a0d83-e7b0-4a3c-963c-df52fccd8582
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:11:05.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7350" for this suite.
Feb 14 15:11:11.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:11:12.023: INFO: namespace configmap-7350 deletion completed in 6.181169286s

• [SLOW TEST:6.385 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
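The test above expects the API server to reject a ConfigMap whose data map contains an empty key: keys must be non-empty, consist of alphanumerics, `-`, `_`, or `.`, and (as I understand the validation rules) be at most 253 characters. A client-side sketch of that rule:

```python
import re

# Sketch of ConfigMap key validation: non-empty, [-._a-zA-Z0-9] only,
# length-limited. The empty key is exactly what this test submits and
# expects the API server to refuse.
KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def valid_configmap_key(key: str) -> bool:
    """True if `key` would pass ConfigMap data-key validation (sketch)."""
    return len(key) <= 253 and bool(KEY_RE.match(key))

assert not valid_configmap_key("")              # empty key: rejected
assert valid_configmap_key("game.properties")   # typical valid key
assert not valid_configmap_key("bad key!")      # spaces/punctuation: rejected
```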
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:11:12.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 15:11:12.284: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 15:11:12.304: INFO: Number of nodes with available pods: 0
Feb 14 15:11:12.304: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:13.962: INFO: Number of nodes with available pods: 0
Feb 14 15:11:13.963: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:14.584: INFO: Number of nodes with available pods: 0
Feb 14 15:11:14.584: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:15.321: INFO: Number of nodes with available pods: 0
Feb 14 15:11:15.321: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:16.316: INFO: Number of nodes with available pods: 0
Feb 14 15:11:16.316: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:17.316: INFO: Number of nodes with available pods: 0
Feb 14 15:11:17.316: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:19.577: INFO: Number of nodes with available pods: 0
Feb 14 15:11:19.577: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:20.327: INFO: Number of nodes with available pods: 0
Feb 14 15:11:20.327: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:21.377: INFO: Number of nodes with available pods: 0
Feb 14 15:11:21.377: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:22.331: INFO: Number of nodes with available pods: 0
Feb 14 15:11:22.331: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:23.322: INFO: Number of nodes with available pods: 2
Feb 14 15:11:23.322: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 14 15:11:23.375: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:23.376: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:24.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:24.397: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:25.395: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:25.395: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:26.396: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:26.396: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:27.398: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:27.398: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:28.396: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:28.396: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:29.395: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:29.395: INFO: Wrong image for pod: daemon-set-cwlb7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:29.395: INFO: Pod daemon-set-cwlb7 is not available
Feb 14 15:11:30.402: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:30.402: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:31.393: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:31.393: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:33.609: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:33.609: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:34.393: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:34.393: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:36.001: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:36.001: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:36.396: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:36.396: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:37.409: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:37.410: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:38.390: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:38.390: INFO: Pod daemon-set-c6lcg is not available
Feb 14 15:11:39.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:40.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:41.396: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:42.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:43.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:44.398: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:45.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:45.397: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:46.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:46.394: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:47.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:47.394: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:48.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:48.395: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:49.467: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:49.467: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:50.395: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:50.396: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:51.396: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:51.396: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:52.397: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:52.397: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:53.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:53.394: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:54.401: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:54.401: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:55.394: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:55.394: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:56.393: INFO: Wrong image for pod: daemon-set-7j9fl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 15:11:56.393: INFO: Pod daemon-set-7j9fl is not available
Feb 14 15:11:57.395: INFO: Pod daemon-set-lz5wp is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 14 15:11:57.416: INFO: Number of nodes with available pods: 1
Feb 14 15:11:57.416: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:58.431: INFO: Number of nodes with available pods: 1
Feb 14 15:11:58.431: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:11:59.443: INFO: Number of nodes with available pods: 1
Feb 14 15:11:59.443: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:00.428: INFO: Number of nodes with available pods: 1
Feb 14 15:12:00.428: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:01.429: INFO: Number of nodes with available pods: 1
Feb 14 15:12:01.429: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:02.433: INFO: Number of nodes with available pods: 1
Feb 14 15:12:02.434: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:03.439: INFO: Number of nodes with available pods: 1
Feb 14 15:12:03.439: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:04.489: INFO: Number of nodes with available pods: 1
Feb 14 15:12:04.489: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:05.442: INFO: Number of nodes with available pods: 1
Feb 14 15:12:05.442: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:06.436: INFO: Number of nodes with available pods: 2
Feb 14 15:12:06.436: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7097, will wait for the garbage collector to delete the pods
Feb 14 15:12:06.524: INFO: Deleting DaemonSet.extensions daemon-set took: 13.643618ms
Feb 14 15:12:06.825: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.738606ms
Feb 14 15:12:13.422: INFO: Number of nodes with available pods: 0
Feb 14 15:12:13.423: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 15:12:13.428: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7097/daemonsets","resourceVersion":"24339785"},"items":null}

Feb 14 15:12:13.432: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7097/pods","resourceVersion":"24339785"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:12:13.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7097" for this suite.
Feb 14 15:12:19.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:12:19.686: INFO: namespace daemonsets-7097 deletion completed in 6.23139453s

• [SLOW TEST:67.663 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
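The repeated "Wrong image for pod" / "is not available" pairs above come from the framework re-checking every daemon pod once per second until the RollingUpdate rollout finishes. A minimal sketch of that per-pod check (pod fields and function name are illustrative, not the framework's actual Go types):

```python
def check_daemon_pod(pod, expected_image):
    """Return the log messages one polling iteration would emit for a pod
    during a DaemonSet RollingUpdate, mirroring the lines in the log above."""
    msgs = []
    if pod["image"] != expected_image:
        msgs.append(
            f"Wrong image for pod: {pod['name']}. "
            f"Expected: {expected_image}, got: {pod['image']}."
        )
    if not pod["available"]:
        msgs.append(f"Pod {pod['name']} is not available")
    return msgs
```

The loop exits only when an iteration produces no messages for any pod, which is why the log shows the same pair of lines once per second until the replacement pod (daemon-set-lz5wp above) becomes available.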
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:12:19.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 15:12:19.781: INFO: Waiting up to 5m0s for pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b" in namespace "emptydir-6615" to be "success or failure"
Feb 14 15:12:19.790: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089927ms
Feb 14 15:12:21.807: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02643626s
Feb 14 15:12:23.823: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042770736s
Feb 14 15:12:25.832: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051628884s
Feb 14 15:12:27.843: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0627717s
Feb 14 15:12:29.855: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07443288s
STEP: Saw pod success
Feb 14 15:12:29.855: INFO: Pod "pod-1bcae398-040f-4e58-8863-71aaf1bef16b" satisfied condition "success or failure"
Feb 14 15:12:29.864: INFO: Trying to get logs from node iruya-node pod pod-1bcae398-040f-4e58-8863-71aaf1bef16b container test-container: 
STEP: delete the pod
Feb 14 15:12:29.956: INFO: Waiting for pod pod-1bcae398-040f-4e58-8863-71aaf1bef16b to disappear
Feb 14 15:12:30.010: INFO: Pod pod-1bcae398-040f-4e58-8863-71aaf1bef16b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:12:30.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6615" for this suite.
Feb 14 15:12:36.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:12:36.186: INFO: namespace emptydir-6615 deletion completed in 6.164322399s

• [SLOW TEST:16.500 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
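The (non-root,0644,tmpfs) case mounts an emptyDir backed by memory and asserts that the file written into it carries mode 0644. A local stand-in for that permission check (the helper name is an assumption, not the e2e framework's code):

```python
import os
import stat

def file_mode_matches(path, expected_mode):
    """Compare a file's permission bits against an expected octal mode,
    the way the emptyDir mode tests verify files in the mounted volume."""
    actual = stat.S_IMODE(os.stat(path).st_mode)
    return actual == expected_mode
```

`stat.S_IMODE` strips the file-type bits, so only the permission bits (e.g. `0o644`, rw-r--r--) are compared.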
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:12:36.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 14 15:12:36.351: INFO: Create a RollingUpdate DaemonSet
Feb 14 15:12:36.365: INFO: Check that daemon pods launch on every node of the cluster
Feb 14 15:12:36.399: INFO: Number of nodes with available pods: 0
Feb 14 15:12:36.399: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:39.057: INFO: Number of nodes with available pods: 0
Feb 14 15:12:39.057: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:39.488: INFO: Number of nodes with available pods: 0
Feb 14 15:12:39.488: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:40.420: INFO: Number of nodes with available pods: 0
Feb 14 15:12:40.421: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:41.413: INFO: Number of nodes with available pods: 0
Feb 14 15:12:41.413: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:42.574: INFO: Number of nodes with available pods: 0
Feb 14 15:12:42.574: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:44.616: INFO: Number of nodes with available pods: 0
Feb 14 15:12:44.616: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:45.558: INFO: Number of nodes with available pods: 0
Feb 14 15:12:45.558: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:46.614: INFO: Number of nodes with available pods: 0
Feb 14 15:12:46.614: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:47.414: INFO: Number of nodes with available pods: 0
Feb 14 15:12:47.414: INFO: Node iruya-node is running more than one daemon pod
Feb 14 15:12:48.448: INFO: Number of nodes with available pods: 2
Feb 14 15:12:48.448: INFO: Number of running nodes: 2, number of available pods: 2
Feb 14 15:12:48.448: INFO: Update the DaemonSet to trigger a rollout
Feb 14 15:12:48.459: INFO: Updating DaemonSet daemon-set
Feb 14 15:12:58.502: INFO: Roll back the DaemonSet before rollout is complete
Feb 14 15:12:58.524: INFO: Updating DaemonSet daemon-set
Feb 14 15:12:58.524: INFO: Make sure DaemonSet rollback is complete
Feb 14 15:12:59.226: INFO: Wrong image for pod: daemon-set-478n6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 14 15:12:59.226: INFO: Pod daemon-set-478n6 is not available
Feb 14 15:13:00.585: INFO: Wrong image for pod: daemon-set-478n6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 14 15:13:00.585: INFO: Pod daemon-set-478n6 is not available
Feb 14 15:13:01.577: INFO: Wrong image for pod: daemon-set-478n6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 14 15:13:01.577: INFO: Pod daemon-set-478n6 is not available
Feb 14 15:13:03.379: INFO: Wrong image for pod: daemon-set-478n6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 14 15:13:03.379: INFO: Pod daemon-set-478n6 is not available
Feb 14 15:13:03.764: INFO: Wrong image for pod: daemon-set-478n6. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 14 15:13:03.764: INFO: Pod daemon-set-478n6 is not available
Feb 14 15:13:04.590: INFO: Pod daemon-set-788zr is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7984, will wait for the garbage collector to delete the pods
Feb 14 15:13:04.730: INFO: Deleting DaemonSet.extensions daemon-set took: 34.353205ms
Feb 14 15:13:05.532: INFO: Terminating DaemonSet.extensions daemon-set pods took: 801.169826ms
Feb 14 15:13:17.443: INFO: Number of nodes with available pods: 0
Feb 14 15:13:17.443: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 15:13:17.450: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7984/daemonsets","resourceVersion":"24339993"},"items":null}

Feb 14 15:13:17.456: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7984/pods","resourceVersion":"24339993"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:13:17.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7984" for this suite.
Feb 14 15:13:23.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:13:23.660: INFO: namespace daemonsets-7984 deletion completed in 6.18212039s

• [SLOW TEST:47.473 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
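"Rollback without unnecessary restarts" means that after the template is reverted, only pods still running the failed image (daemon-set-478n6 on `foo:non-existent` above) are recreated; pods already on the good image are left untouched. A sketch of that selection (pod dicts are illustrative):

```python
def pods_to_replace(pods, good_image):
    """After a DaemonSet rollback, only pods still running the failed image
    need recreation; pods on the original image incur no restart."""
    return [p["name"] for p in pods if p["image"] != good_image]
```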
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:13:23.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 15:13:32.121: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:13:32.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6012" for this suite.
Feb 14 15:13:38.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:13:38.494: INFO: namespace container-runtime-6012 deletion completed in 6.298896575s

• [SLOW TEST:14.833 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
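The `Expected: &{OK} to match Container's Termination Message: OK` line reflects the TerminationMessagePolicy rules: the message file always wins when non-empty; with FallbackToLogsOnError, the log tail is used only when the file is empty *and* the container failed. A hedged approximation (function name and the exact tail limit are illustrative; kubelet bounds both sources):

```python
def termination_message(file_contents, logs, exit_code,
                        policy="FallbackToLogsOnError"):
    """Approximate kubelet's termination-message resolution: the file at
    terminationMessagePath wins; logs are the fallback only on error."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-4096:]  # kubelet uses a bounded log tail; limit illustrative
    return ""
```

In the test above the pod succeeds and writes "OK" to the file, so the fallback path is never taken.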
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:13:38.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 14 15:13:47.691: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:13:48.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4749" for this suite.
Feb 14 15:14:10.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:14:11.018: INFO: namespace replicaset-4749 deletion completed in 22.279239606s

• [SLOW TEST:32.524 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
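Adoption and release both hinge on label-selector matching: the ReplicaSet adopts an orphan pod whose labels satisfy its selector, and releases a pod once a label is changed so the selector no longer matches. For equality-based selectors this reduces to:

```python
def selector_matches(selector, labels):
    """Equality-based selector matching: every selector key/value pair must
    be present in the pod's labels for the ReplicaSet to own the pod."""
    return all(labels.get(k) == v for k, v in selector.items())
```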
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:14:11.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 14 15:14:11.129: INFO: Waiting up to 5m0s for pod "pod-83254448-d311-4818-9227-9572be3c7752" in namespace "emptydir-8068" to be "success or failure"
Feb 14 15:14:11.139: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 9.895265ms
Feb 14 15:14:13.147: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017340807s
Feb 14 15:14:15.155: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025525967s
Feb 14 15:14:17.167: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037783373s
Feb 14 15:14:19.176: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047102383s
Feb 14 15:14:21.184: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054989663s
Feb 14 15:14:23.208: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.07890115s
STEP: Saw pod success
Feb 14 15:14:23.208: INFO: Pod "pod-83254448-d311-4818-9227-9572be3c7752" satisfied condition "success or failure"
Feb 14 15:14:23.212: INFO: Trying to get logs from node iruya-node pod pod-83254448-d311-4818-9227-9572be3c7752 container test-container: 
STEP: delete the pod
Feb 14 15:14:23.268: INFO: Waiting for pod pod-83254448-d311-4818-9227-9572be3c7752 to disappear
Feb 14 15:14:23.275: INFO: Pod pod-83254448-d311-4818-9227-9572be3c7752 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:14:23.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8068" for this suite.
Feb 14 15:14:29.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:14:29.435: INFO: namespace emptydir-8068 deletion completed in 6.153212963s

• [SLOW TEST:18.417 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
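This case verifies the emptyDir's volume type (tmpfs, via `medium: Memory`) rather than a file's mode. The in-container check amounts to reading the mount table; a self-contained sketch that parses /proc/mounts-style text (the function name is an assumption):

```python
def mount_type(proc_mounts_text, mountpoint):
    """Return the filesystem type at a mount point from /proc/mounts-style
    text: fields are device, mountpoint, fstype, options, dump, pass."""
    for line in proc_mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[1] == mountpoint:
            return parts[2]
    return None
```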
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:14:29.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 14 15:14:29.524: INFO: Waiting up to 5m0s for pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e" in namespace "containers-9307" to be "success or failure"
Feb 14 15:14:29.617: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Pending", Reason="", readiness=false. Elapsed: 92.627083ms
Feb 14 15:14:31.628: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103343929s
Feb 14 15:14:33.676: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151863724s
Feb 14 15:14:35.691: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166444615s
Feb 14 15:14:37.702: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177896194s
Feb 14 15:14:39.711: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186075913s
STEP: Saw pod success
Feb 14 15:14:39.711: INFO: Pod "client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e" satisfied condition "success or failure"
Feb 14 15:14:39.715: INFO: Trying to get logs from node iruya-node pod client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e container test-container: 
STEP: delete the pod
Feb 14 15:14:39.818: INFO: Waiting for pod client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e to disappear
Feb 14 15:14:39.860: INFO: Pod client-containers-6a33c55f-9b6f-4c89-a5bc-05c7969aa19e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:14:39.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9307" for this suite.
Feb 14 15:14:45.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:14:46.136: INFO: namespace containers-9307 deletion completed in 6.249054163s

• [SLOW TEST:16.700 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
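"Override all" here means the pod spec sets both `command` and `args`, replacing the image's ENTRYPOINT and CMD. The resolution table from the Kubernetes documentation, as a sketch:

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Kubernetes command/args resolution: `command` replaces ENTRYPOINT,
    `args` replaces CMD; if `command` is set without `args`, CMD is ignored."""
    if command is None and args is None:
        return image_entrypoint + image_cmd
    if command is not None and args is None:
        return command
    if command is None:
        return image_entrypoint + args
    return command + args
```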
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 14 15:14:46.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d6f7941d-3d86-401e-aca3-892a0b5b3732
STEP: Creating a pod to test consume configMaps
Feb 14 15:14:46.209: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f" in namespace "projected-2010" to be "success or failure"
Feb 14 15:14:46.257: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.858587ms
Feb 14 15:14:48.270: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06031422s
Feb 14 15:14:50.277: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067207389s
Feb 14 15:14:52.286: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076196699s
Feb 14 15:14:54.296: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086763287s
Feb 14 15:14:56.307: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.097280408s
STEP: Saw pod success
Feb 14 15:14:56.307: INFO: Pod "pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f" satisfied condition "success or failure"
Feb 14 15:14:56.313: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 15:14:56.404: INFO: Waiting for pod pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f to disappear
Feb 14 15:14:56.417: INFO: Pod pod-projected-configmaps-cf7874f9-aa82-4f82-9482-f18e2481089f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 14 15:14:56.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2010" for this suite.
Feb 14 15:15:02.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 15:15:02.584: INFO: namespace projected-2010 deletion completed in 6.128374556s

• [SLOW TEST:16.448 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
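"Mappings and Item mode" means the projected configMap volume uses an `items` list: each selected key is written to a remapped `path` with a per-item `mode` instead of the volume default. A sketch of that projection (key, path, and mode values below are illustrative, not the test's actual data):

```python
def project_items(configmap_data, items, default_mode=0o644):
    """Projected-configMap items mapping: write each selected key's value to
    its 'path', using the item's 'mode' when set, else the volume default."""
    out = {}
    for it in items:
        out[it["path"]] = (configmap_data[it["key"]], it.get("mode", default_mode))
    return out
```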
SSSS
Feb 14 15:15:02.585: INFO: Running AfterSuite actions on all nodes
Feb 14 15:15:02.586: INFO: Running AfterSuite actions on node 1
Feb 14 15:15:02.586: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8330.342 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS