I0105 12:55:59.022137 8 e2e.go:243] Starting e2e run "17eced7b-853b-42a2-9b89-c0ac36014a8f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578228957 - Will randomize all specs
Will run 215 of 4412 specs

Jan 5 12:55:59.379: INFO: >>> kubeConfig: /root/.kube/config
Jan 5 12:55:59.383: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 5 12:55:59.416: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 5 12:55:59.466: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 5 12:55:59.466: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 5 12:55:59.466: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 5 12:55:59.481: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 5 12:55:59.481: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 5 12:55:59.481: INFO: e2e test version: v1.15.7
Jan 5 12:55:59.503: INFO: kube-apiserver version: v1.15.1
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:55:59.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jan 5 12:55:59.621: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 5 12:55:59.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8" in namespace "projected-268" to be "success or failure"
Jan 5 12:55:59.669: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.06215ms
Jan 5 12:56:01.681: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046972974s
Jan 5 12:56:03.692: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058446934s
Jan 5 12:56:05.709: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074581247s
Jan 5 12:56:07.716: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081857776s
STEP: Saw pod success
Jan 5 12:56:07.716: INFO: Pod "downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8" satisfied condition "success or failure"
Jan 5 12:56:07.720: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8 container client-container: 
STEP: delete the pod
Jan 5 12:56:07.867: INFO: Waiting for pod downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8 to disappear
Jan 5 12:56:07.878: INFO: Pod downwardapi-volume-5c9699ac-8aa2-44e4-8039-bacd924870e8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:56:07.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-268" for this suite.
Jan 5 12:56:13.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:56:14.032: INFO: namespace projected-268 deletion completed in 6.145797814s

• [SLOW TEST:14.529 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
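Editor's note: the pod this test creates is roughly equivalent to the manifest below, a projected downwardAPI volume exposing the container's CPU limit through resourceFieldRef, which the container then reads back from the mounted file. The suite builds its fixture in Go; the name, image, and limit value here are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative limit surfaced by the volume below
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu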
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:56:14.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 5 12:56:14.104: INFO: Creating ReplicaSet my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f
Jan 5 12:56:14.185: INFO: Pod name my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f: Found 0 pods out of 1
Jan 5 12:56:19.194: INFO: Pod name my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f: Found 1 pods out of 1
Jan 5 12:56:19.194: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f" is running
Jan 5 12:56:23.206: INFO: Pod "my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f-dlgft" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 12:56:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 12:56:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 12:56:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 12:56:14 +0000 UTC Reason: Message:}])
Jan 5 12:56:23.206: INFO: Trying to dial the pod
Jan 5 12:56:28.239: INFO: Controller my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f: Got expected result from replica 1 [my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f-dlgft]: "my-hostname-basic-e4efd0c6-6ea9-436f-ab12-9e2155e64a4f-dlgft", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:56:28.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8692" for this suite.
Jan 5 12:56:34.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:56:34.472: INFO: namespace replicaset-8692 deletion completed in 6.227791645s

• [SLOW TEST:20.440 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
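Editor's note: the ReplicaSet under test serves each pod's hostname over HTTP, and the suite dials every replica until it answers with its own pod name. A minimal equivalent manifest might look like this; the image, tag, and port are assumptions, since the suite constructs the object in Go.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # the suite appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed public serve-hostname image/tag
        ports:
        - containerPort: 9376        # assumed; the port serve-hostname answers on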
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:56:34.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 5 12:56:34.665: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1225d9ee-14b6-48ae-8b92-c57d773ce9c1", Controller:(*bool)(0xc002b2de32), BlockOwnerDeletion:(*bool)(0xc002b2de33)}}
Jan 5 12:56:34.681: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d7a18876-30ec-46ea-9cac-73f439ee9ad9", Controller:(*bool)(0xc002cbb76a), BlockOwnerDeletion:(*bool)(0xc002cbb76b)}}
Jan 5 12:56:34.706: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7fbbbf40-e331-4c7e-a847-63e3633fa668", Controller:(*bool)(0xc00033c09a), BlockOwnerDeletion:(*bool)(0xc00033c09b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:56:39.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1232" for this suite.
Jan 5 12:56:45.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:56:45.991: INFO: namespace gc-1232 deletion completed in 6.171320055s

• [SLOW TEST:11.516 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
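Editor's note: the three pods above form a reference cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector must still make progress when the namespace is deleted. Expressed as a manifest fragment, an owner reference like the ones logged looks as follows; the UID is copied from the log, while the controller/blockOwnerDeletion values are illustrative since the log only shows pointers.

metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 1225d9ee-14b6-48ae-8b92-c57d773ce9c1   # from the log; normally set by the client from the live object
    controller: true            # illustrative value
    blockOwnerDeletion: true    # illustrative value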
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:56:45.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-4c6de7e8-0a33-4968-b2cf-d6b47730b888
STEP: Creating a pod to test consume secrets
Jan 5 12:56:46.138: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1" in namespace "projected-5461" to be "success or failure"
Jan 5 12:56:46.169: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 30.637563ms
Jan 5 12:56:48.185: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046497483s
Jan 5 12:56:50.192: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053591413s
Jan 5 12:56:52.197: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058581459s
Jan 5 12:56:54.204: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065909152s
STEP: Saw pod success
Jan 5 12:56:54.204: INFO: Pod "pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1" satisfied condition "success or failure"
Jan 5 12:56:54.208: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1 container projected-secret-volume-test: 
STEP: delete the pod
Jan 5 12:56:54.340: INFO: Waiting for pod pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1 to disappear
Jan 5 12:56:54.498: INFO: Pod pod-projected-secrets-dbf32de6-c74a-431d-9a1b-fe2ef0d837e1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:56:54.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5461" for this suite.
Jan 5 12:57:00.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:57:00.683: INFO: namespace projected-5461 deletion completed in 6.16473361s

• [SLOW TEST:14.692 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
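Editor's note: the secret is mounted through a projected volume; the test container prints the file contents and exits, and the pod reaching Succeeded is what satisfies the "success or failure" condition. A rough sketch, with the secret name shortened and the image and key assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                                                    # assumed image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/data-1"]  # key name is an assumption
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test    # the suite uses a UUID-suffixed name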
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:57:00.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:57:08.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6004" for this suite.
Jan 5 12:57:51.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:57:51.158: INFO: namespace kubelet-test-6004 deletion completed in 42.161223218s

• [SLOW TEST:50.474 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
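Editor's note: the read-only test hinges on securityContext.readOnlyRootFilesystem; a write to the container's root filesystem must fail. A minimal sketch (image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-fs
    image: busybox
    command: ["sh", "-c", "echo test > /file; sleep 240"]  # the write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true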
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:57:51.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 5 12:57:51.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 5 12:57:53.158: INFO: stderr: ""
Jan 5 12:57:53.158: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:57:53.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-735" for this suite.
Jan 5 12:57:59.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:57:59.493: INFO: namespace kubectl-735 deletion completed in 6.316125526s

• [SLOW TEST:8.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:57:59.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 5 12:58:17.684: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:17.735: INFO: Pod pod-with-prestop-http-hook still exists
Jan 5 12:58:19.736: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:19.749: INFO: Pod pod-with-prestop-http-hook still exists
Jan 5 12:58:21.736: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:21.749: INFO: Pod pod-with-prestop-http-hook still exists
Jan 5 12:58:23.736: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:23.747: INFO: Pod pod-with-prestop-http-hook still exists
Jan 5 12:58:25.736: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:25.746: INFO: Pod pod-with-prestop-http-hook still exists
Jan 5 12:58:27.736: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 5 12:58:27.744: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:58:27.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9512" for this suite.
Jan 5 12:58:49.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:58:50.007: INFO: namespace container-lifecycle-hook-9512 deletion completed in 22.219887915s

• [SLOW TEST:50.513 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
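Editor's note: the hook pod registers an HTTP preStop handler; when the suite deletes the pod, the kubelet must call the handler before the container is killed, which the separate handler pod then confirms. A sketch of the hook side follows; the image, path, and port are assumptions, and the real test points httpGet at the handler pod's IP.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1        # assumed placeholder image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop      # assumed handler path
          port: 8080                   # assumed handler port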
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:58:50.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 5 12:58:58.181: INFO: Pod pod-hostip-c524d3ce-c011-4c7d-b75a-b4ec94cfb1f4 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:58:58.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2380" for this suite.
Jan 5 12:59:20.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:59:20.467: INFO: namespace pods-2380 deletion completed in 22.281426134s

• [SLOW TEST:30.460 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
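Editor's note: the test reads status.hostIP from the pod's status via the API (10.96.3.65 above is the node's address). The same value can also be consumed inside a pod through the downward API as an environment variable; a sketch, with name, image, and command as illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-example
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox              # assumed image
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # populated by the kubelet from pod status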
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:59:20.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-cc8d46ee-5679-4379-8e17-d0f0ab44ff6e
STEP: Creating a pod to test consume secrets
Jan 5 12:59:20.669: INFO: Waiting up to 5m0s for pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414" in namespace "secrets-8830" to be "success or failure"
Jan 5 12:59:20.711: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Pending", Reason="", readiness=false. Elapsed: 41.160993ms
Jan 5 12:59:22.721: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051310874s
Jan 5 12:59:24.737: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067615038s
Jan 5 12:59:26.751: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081204109s
Jan 5 12:59:28.760: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089948934s
Jan 5 12:59:30.765: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095544486s
STEP: Saw pod success
Jan 5 12:59:30.765: INFO: Pod "pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414" satisfied condition "success or failure"
Jan 5 12:59:30.769: INFO: Trying to get logs from node iruya-node pod pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414 container secret-volume-test: 
STEP: delete the pod
Jan 5 12:59:30.929: INFO: Waiting for pod pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414 to disappear
Jan 5 12:59:30.944: INFO: Pod pod-secrets-a8673ec7-722a-4c7b-9c7a-3ed10d73f414 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:59:30.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8830" for this suite.
Jan 5 12:59:36.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:59:37.102: INFO: namespace secrets-8830 deletion completed in 6.153542933s

• [SLOW TEST:16.634 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
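Editor's note: unlike the projected variant earlier in the run, this test mounts the secret with a plain secret volume source. Roughly equivalent, with names shortened and the image and key assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                                           # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # key name is an assumption
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # the suite uses a UUID-suffixed name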
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:59:37.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 5 12:59:37.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2" in namespace "projected-7378" to be "success or failure"
Jan 5 12:59:37.213: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.720342ms
Jan 5 12:59:39.221: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018619819s
Jan 5 12:59:41.242: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040484002s
Jan 5 12:59:43.251: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048761129s
Jan 5 12:59:45.261: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058833261s
STEP: Saw pod success
Jan 5 12:59:45.261: INFO: Pod "downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2" satisfied condition "success or failure"
Jan 5 12:59:45.265: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2 container client-container: 
STEP: delete the pod
Jan 5 12:59:45.391: INFO: Waiting for pod downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2 to disappear
Jan 5 12:59:45.398: INFO: Pod downwardapi-volume-e6f7c322-78b4-4973-ae76-580436224fe2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 12:59:45.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7378" for this suite.
Jan 5 12:59:51.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 12:59:51.562: INFO: namespace projected-7378 deletion completed in 6.157645826s

• [SLOW TEST:14.459 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
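Editor's note: here the projected downwardAPI volume exposes metadata.name rather than a resource limit, and the container verifies the mounted file contains its own pod name. The relevant volume source, as a sketch with assumed file path:

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname          # assumed file name inside the mount
            fieldRef:
              fieldPath: metadata.name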
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 12:59:51.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-669
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 5 12:59:51.653: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 5 13:00:31.996: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-669 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 5 13:00:31.996: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:00:32.110004 8 log.go:172] (0xc0013fc8f0) (0xc000f177c0) Create stream
I0105 13:00:32.110331 8 log.go:172] (0xc0013fc8f0) (0xc000f177c0) Stream added, broadcasting: 1
I0105 13:00:32.127091 8 log.go:172] (0xc0013fc8f0) Reply frame received for 1
I0105 13:00:32.127280 8 log.go:172] (0xc0013fc8f0) (0xc002284d20) Create stream
I0105 13:00:32.127294 8 log.go:172] (0xc0013fc8f0) (0xc002284d20) Stream added, broadcasting: 3
I0105 13:00:32.132200 8 log.go:172] (0xc0013fc8f0) Reply frame received for 3
I0105 13:00:32.132232 8 log.go:172] (0xc0013fc8f0) (0xc000f17a40) Create stream
I0105 13:00:32.132240 8 log.go:172] (0xc0013fc8f0) (0xc000f17a40) Stream added, broadcasting: 5
I0105 13:00:32.135578 8 log.go:172] (0xc0013fc8f0) Reply frame received for 5
I0105 13:00:32.567153 8 log.go:172] (0xc0013fc8f0) Data frame received for 3
I0105 13:00:32.567267 8 log.go:172] (0xc002284d20) (3) Data frame handling
I0105 13:00:32.567292 8 log.go:172] (0xc002284d20) (3) Data frame sent
I0105 13:00:32.802783 8 log.go:172] (0xc0013fc8f0) (0xc002284d20) Stream removed, broadcasting: 3
I0105 13:00:32.802913 8 log.go:172] (0xc0013fc8f0) Data frame received for 1
I0105 13:00:32.802942 8 log.go:172] (0xc0013fc8f0) (0xc000f17a40) Stream removed, broadcasting: 5
I0105 13:00:32.802970 8 log.go:172] (0xc000f177c0) (1) Data frame handling
I0105 13:00:32.802999 8 log.go:172] (0xc000f177c0) (1) Data frame sent
I0105 13:00:32.803007 8 log.go:172] (0xc0013fc8f0) (0xc000f177c0) Stream removed, broadcasting: 1
I0105 13:00:32.803016 8 log.go:172] (0xc0013fc8f0) Go away received
I0105 13:00:32.803986 8 log.go:172] (0xc0013fc8f0) (0xc000f177c0) Stream removed, broadcasting: 1
I0105 13:00:32.804000 8 log.go:172] (0xc0013fc8f0) (0xc002284d20) Stream removed, broadcasting: 3
I0105 13:00:32.804010 8 log.go:172] (0xc0013fc8f0) (0xc000f17a40) Stream removed, broadcasting: 5
Jan 5 13:00:32.804: INFO: Found all expected endpoints: [netserver-0]
Jan 5 13:00:32.810: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-669 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 5 13:00:32.810: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:00:32.860035 8 log.go:172] (0xc0024eb4a0) (0xc001f62dc0) Create stream
I0105 13:00:32.860369 8 log.go:172] (0xc0024eb4a0) (0xc001f62dc0) Stream added, broadcasting: 1
I0105 13:00:32.876469 8 log.go:172] (0xc0024eb4a0) Reply frame received for 1
I0105 13:00:32.876657 8 log.go:172] (0xc0024eb4a0) (0xc001f62e60) Create stream
I0105 13:00:32.876677 8 log.go:172] (0xc0024eb4a0) (0xc001f62e60) Stream added, broadcasting: 3
I0105 13:00:32.878001 8 log.go:172] (0xc0024eb4a0) Reply frame received for 3
I0105 13:00:32.878026 8 log.go:172] (0xc0024eb4a0) (0xc000f17cc0) Create stream
I0105 13:00:32.878034 8 log.go:172] (0xc0024eb4a0) (0xc000f17cc0) Stream added, broadcasting: 5
I0105 13:00:32.880976 8 log.go:172] (0xc0024eb4a0) Reply frame received for 5
I0105 13:00:32.989791 8 log.go:172] (0xc0024eb4a0) Data frame received for 3
I0105 13:00:32.989857 8 log.go:172] (0xc001f62e60) (3) Data frame handling
I0105 13:00:32.989875 8 log.go:172] (0xc001f62e60) (3) Data frame sent
I0105 13:00:33.100804 8 log.go:172] (0xc0024eb4a0) Data frame received for 1
I0105 13:00:33.101022 8 log.go:172] (0xc0024eb4a0) (0xc001f62e60) Stream removed, broadcasting: 3
I0105 13:00:33.101108 8 log.go:172] (0xc001f62dc0) (1) Data frame handling
I0105 13:00:33.101131 8 log.go:172] (0xc0024eb4a0) (0xc000f17cc0) Stream removed, broadcasting: 5
I0105 13:00:33.101185 8 log.go:172] (0xc001f62dc0) (1) Data frame sent
I0105 13:00:33.101201 8 log.go:172] (0xc0024eb4a0) (0xc001f62dc0) Stream removed, broadcasting: 1
I0105 13:00:33.101209 8 log.go:172] (0xc0024eb4a0) Go away received
I0105 13:00:33.101505 8 log.go:172] (0xc0024eb4a0) (0xc001f62dc0) Stream removed, broadcasting: 1
I0105 13:00:33.101533 8 log.go:172] (0xc0024eb4a0) (0xc001f62e60) Stream removed, broadcasting: 3
I0105 13:00:33.101551 8 log.go:172] (0xc0024eb4a0) (0xc000f17cc0) Stream removed, broadcasting: 5
Jan 5 13:00:33.101: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 13:00:33.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-669" for this suite.
Jan 5 13:00:57.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 13:00:57.407: INFO: namespace pod-network-test-669 deletion completed in 24.288257603s

• [SLOW TEST:65.844 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 13:00:57.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 13:01:57.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2446" for this suite.
Jan 5 13:02:19.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 13:02:19.740: INFO: namespace container-probe-2446 deletion completed in 22.230127293s

• [SLOW TEST:82.333 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
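Editor's note: the probe test runs a container whose readiness probe always fails, then asserts for a full minute that the pod never reports Ready and its container is never restarted (a failing readiness probe, unlike a failing liveness probe, must not kill the container). A minimal sketch; the image and probe command are modeled on the suite's conventions and should be treated as assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: busybox
    image: busybox                 # assumed image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5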
"pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238": Phase="Pending", Reason="", readiness=false. Elapsed: 7.010683ms Jan 5 13:02:28.139: INFO: Pod "pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020069343s Jan 5 13:02:30.150: INFO: Pod "pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030958444s Jan 5 13:02:32.162: INFO: Pod "pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042680691s Jan 5 13:02:34.177: INFO: Pod "pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058123851s STEP: Saw pod success Jan 5 13:02:34.177: INFO: Pod "pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238" satisfied condition "success or failure" Jan 5 13:02:34.184: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238 container env-test: STEP: delete the pod Jan 5 13:02:34.256: INFO: Waiting for pod pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238 to disappear Jan 5 13:02:34.270: INFO: Pod pod-configmaps-a466abcb-055c-4e82-94f7-a2a03d782238 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 5 13:02:34.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2695" for this suite. Jan 5 13:02:40.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 13:02:40.461: INFO: namespace configmap-2695 deletion completed in 6.184225743s • [SLOW TEST:14.479 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 5 13:02:40.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9239 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-9239 STEP: Creating statefulset with conflicting port in namespace statefulset-9239 STEP: Waiting until pod test-pod will start running in namespace statefulset-9239 STEP: Waiting until stateful pod 
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 13:02:40.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9239
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9239
STEP: Creating statefulset with conflicting port in namespace statefulset-9239
STEP: Waiting until pod test-pod will start running in namespace statefulset-9239
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9239
Jan 5 13:02:50.750: INFO: Observed stateful pod in namespace: statefulset-9239, name: ss-0, uid: 0ba21230-2140-494a-a511-f26545cb5337, status phase: Pending. Waiting for statefulset controller to delete.
Jan 5 13:02:51.224: INFO: Observed stateful pod in namespace: statefulset-9239, name: ss-0, uid: 0ba21230-2140-494a-a511-f26545cb5337, status phase: Failed. Waiting for statefulset controller to delete.
Jan 5 13:02:51.239: INFO: Observed stateful pod in namespace: statefulset-9239, name: ss-0, uid: 0ba21230-2140-494a-a511-f26545cb5337, status phase: Failed. Waiting for statefulset controller to delete.
Jan 5 13:02:51.243: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9239
STEP: Removing pod with conflicting port in namespace statefulset-9239
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9239 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 5 13:03:03.506: INFO: Deleting all statefulset in ns statefulset-9239
Jan 5 13:03:03.511: INFO: Scaling statefulset ss to 0
Jan 5 13:03:13.559: INFO: Waiting for statefulset status.replicas updated to 0
Jan 5 13:03:13.565: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 5 13:03:13.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9239" for this suite.
Jan 5 13:03:19.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 5 13:03:19.918: INFO: namespace statefulset-9239 deletion completed in 6.309930074s

• [SLOW TEST:39.457 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
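Editor's note: the scenario is that a plain pod (test-pod) already holds a hostPort on the node, the StatefulSet's pod requests the same port and fails admission (Failed phase above), and the controller must keep recreating ss-0 until the conflicting pod is removed. The StatefulSet's shape is roughly as below; the image and port number are assumptions, while serviceName matches the "Creating service test" step in the log.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test              # headless service created earlier in the test
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: nginx             # assumed image
        ports:
        - containerPort: 21017   # assumed port; whatever test-pod already binds via hostPort
          hostPort: 21017        # the hostPort conflict is what forces the eviction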
failure" Jan 5 13:03:20.080: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.300644ms Jan 5 13:03:22.090: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029509808s Jan 5 13:03:24.103: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042073021s Jan 5 13:03:26.111: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050590488s Jan 5 13:03:28.133: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072820178s Jan 5 13:03:30.147: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.086676838s Jan 5 13:03:32.162: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.101851145s STEP: Saw pod success Jan 5 13:03:32.162: INFO: Pod "pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d" satisfied condition "success or failure" Jan 5 13:03:32.171: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d container projected-secret-volume-test: STEP: delete the pod Jan 5 13:03:32.293: INFO: Waiting for pod pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d to disappear Jan 5 13:03:32.351: INFO: Pod pod-projected-secrets-adaf61d3-bef7-4815-85ef-6333913a031d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 5 13:03:32.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5315" for this suite. Jan 5 13:03:38.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 5 13:03:38.501: INFO: namespace projected-5315 deletion completed in 6.142972309s • [SLOW TEST:18.582 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 5 13:03:38.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 5 13:03:38.673: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 5 13:03:38.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 5 13:03:38.673: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 91.645433ms)
Jan  5 13:03:38.686: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.884955ms)
Jan  5 13:03:38.693: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.667019ms)
Jan  5 13:03:38.710: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 16.836821ms)
Jan  5 13:03:38.719: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.487006ms)
Jan  5 13:03:38.724: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.434845ms)
Jan  5 13:03:38.728: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.406246ms)
Jan  5 13:03:38.733: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.542615ms)
Jan  5 13:03:38.737: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.622112ms)
Jan  5 13:03:38.744: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.27436ms)
Jan  5 13:03:38.757: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.708472ms)
Jan  5 13:03:38.763: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.348149ms)
Jan  5 13:03:38.768: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.975796ms)
Jan  5 13:03:38.772: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.97961ms)
Jan  5 13:03:38.776: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.160424ms)
Jan  5 13:03:38.783: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.945952ms)
Jan  5 13:03:38.789: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.380005ms)
Jan  5 13:03:38.792: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.875706ms)
Jan  5 13:03:38.798: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.104869ms)
Jan  5 13:03:38.804: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.990754ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:03:38.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7066" for this suite.
Jan  5 13:03:44.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:03:44.995: INFO: namespace proxy-7066 deletion completed in 6.186881899s

• [SLOW TEST:6.493 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:03:44.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-b9320951-0f49-4796-ae35-a895a91e630a
STEP: Creating a pod to test consume secrets
Jan  5 13:03:45.188: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf" in namespace "projected-5461" to be "success or failure"
Jan  5 13:03:45.229: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.296819ms
Jan  5 13:03:47.243: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055159933s
Jan  5 13:03:49.258: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069752878s
Jan  5 13:03:51.276: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088688903s
Jan  5 13:03:53.283: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095118553s
Jan  5 13:03:55.300: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111812123s
STEP: Saw pod success
Jan  5 13:03:55.300: INFO: Pod "pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf" satisfied condition "success or failure"
Jan  5 13:03:55.311: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf container secret-volume-test: 
STEP: delete the pod
Jan  5 13:03:55.378: INFO: Waiting for pod pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf to disappear
Jan  5 13:03:55.493: INFO: Pod pod-projected-secrets-e7a297e9-dfae-4c52-a9b0-39dfe3b9f3cf no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:03:55.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5461" for this suite.
Jan  5 13:04:01.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:04:01.740: INFO: namespace projected-5461 deletion completed in 6.240616553s

• [SLOW TEST:16.745 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
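
Editor's note: a minimal sketch of the kind of pod this test builds, consuming one secret through two separate projected volumes; every name below (multi-volume-secret, secret-vol-1/2, and so on) is hypothetical rather than taken from the run above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: multi-volume-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Read the same key back from both mount points.
    command: ["sh", "-c", "cat /etc/projected-volume-1/data-1 /etc/projected-volume-2/data-1"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/projected-volume-1
    - name: secret-vol-2
      mountPath: /etc/projected-volume-2
  volumes:
  # Two projected volumes sourcing the same secret.
  - name: secret-vol-1
    projected:
      sources:
      - secret:
          name: multi-volume-secret
  - name: secret-vol-2
    projected:
      sources:
      - secret:
          name: multi-volume-secret
EOF
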
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:04:01.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  5 13:04:02.646: INFO: Pod name wrapped-volume-race-801d3ef5-fe97-4083-b436-e5b4db2250db: Found 0 pods out of 5
Jan  5 13:04:07.668: INFO: Pod name wrapped-volume-race-801d3ef5-fe97-4083-b436-e5b4db2250db: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-801d3ef5-fe97-4083-b436-e5b4db2250db in namespace emptydir-wrapper-8237, will wait for the garbage collector to delete the pods
Jan  5 13:04:41.856: INFO: Deleting ReplicationController wrapped-volume-race-801d3ef5-fe97-4083-b436-e5b4db2250db took: 65.217081ms
Jan  5 13:04:42.357: INFO: Terminating ReplicationController wrapped-volume-race-801d3ef5-fe97-4083-b436-e5b4db2250db pods took: 501.19444ms
STEP: Creating RC which spawns configmap-volume pods
Jan  5 13:05:27.968: INFO: Pod name wrapped-volume-race-28565502-21b5-4425-a881-d47a2fe46908: Found 0 pods out of 5
Jan  5 13:05:32.979: INFO: Pod name wrapped-volume-race-28565502-21b5-4425-a881-d47a2fe46908: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-28565502-21b5-4425-a881-d47a2fe46908 in namespace emptydir-wrapper-8237, will wait for the garbage collector to delete the pods
Jan  5 13:06:05.093: INFO: Deleting ReplicationController wrapped-volume-race-28565502-21b5-4425-a881-d47a2fe46908 took: 17.149094ms
Jan  5 13:06:05.394: INFO: Terminating ReplicationController wrapped-volume-race-28565502-21b5-4425-a881-d47a2fe46908 pods took: 300.997393ms
STEP: Creating RC which spawns configmap-volume pods
Jan  5 13:06:49.724: INFO: Pod name wrapped-volume-race-269e7c44-327b-4ced-9ab5-be7f66ed89ba: Found 0 pods out of 5
Jan  5 13:06:54.749: INFO: Pod name wrapped-volume-race-269e7c44-327b-4ced-9ab5-be7f66ed89ba: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-269e7c44-327b-4ced-9ab5-be7f66ed89ba in namespace emptydir-wrapper-8237, will wait for the garbage collector to delete the pods
Jan  5 13:07:30.862: INFO: Deleting ReplicationController wrapped-volume-race-269e7c44-327b-4ced-9ab5-be7f66ed89ba took: 12.56564ms
Jan  5 13:07:31.262: INFO: Terminating ReplicationController wrapped-volume-race-269e7c44-327b-4ced-9ab5-be7f66ed89ba pods took: 400.54932ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:08:17.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8237" for this suite.
Jan  5 13:08:27.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:08:27.885: INFO: namespace emptydir-wrapper-8237 deletion completed in 10.141939436s

• [SLOW TEST:266.145 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
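
Editor's note: the race check above repeatedly creates a ReplicationController whose pods mount many configMap volumes at once (each configMap mount is backed internally by an emptyDir wrapper, hence the test name). A minimal sketch with hypothetical names, spelling out only two of the fifty configmaps:

# Create the configmaps the pods will mount (the test creates 50).
for i in 0 1; do
  kubectl create configmap racey-configmap-$i --from-literal=data-1=value-$i
done

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-example
  template:
    metadata:
      labels:
        name: wrapped-volume-race-example
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cm-0
          mountPath: /etc/config-0
        - name: cm-1
          mountPath: /etc/config-1
      volumes:
      - name: cm-0
        configMap:
          name: racey-configmap-0
      - name: cm-1
        configMap:
          name: racey-configmap-1
EOF

# Tear down and let the garbage collector reap the pods, as the test does.
kubectl delete rc wrapped-volume-race-example
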
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:08:27.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-8d2fd8f4-917c-4b7b-8c95-4aad94966d49
STEP: Creating a pod to test consume configMaps
Jan  5 13:08:28.085: INFO: Waiting up to 5m0s for pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d" in namespace "configmap-6439" to be "success or failure"
Jan  5 13:08:28.097: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.686496ms
Jan  5 13:08:30.108: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022769926s
Jan  5 13:08:32.121: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035237782s
Jan  5 13:08:34.140: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054648523s
Jan  5 13:08:36.150: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06505019s
Jan  5 13:08:38.159: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073859169s
Jan  5 13:08:40.189: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.10363609s
STEP: Saw pod success
Jan  5 13:08:40.189: INFO: Pod "pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d" satisfied condition "success or failure"
Jan  5 13:08:40.199: INFO: Trying to get logs from node iruya-node pod pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d container configmap-volume-test: 
STEP: delete the pod
Jan  5 13:08:40.428: INFO: Waiting for pod pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d to disappear
Jan  5 13:08:40.439: INFO: Pod pod-configmaps-191cfb62-a8a2-4342-a9c8-da03ec4f515d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:08:40.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6439" for this suite.
Jan  5 13:08:46.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:08:46.630: INFO: namespace configmap-6439 deletion completed in 6.184849774s

• [SLOW TEST:18.744 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
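
Editor's note: a minimal sketch of a pod consuming a configMap volume "with mappings" (an items list remapping a key onto a different relative path) while running as a non-root UID; the names and the UID below are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root, per the [LinuxOnly] variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-example
      # The "mapping": expose key data-1 under a new relative path.
      items:
      - key: data-1
        path: path/to/data-2
EOF
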
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:08:46.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  5 13:09:08.863: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:08.863: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:08.976568       8 log.go:172] (0xc0007eb8c0) (0xc0016e6140) Create stream
I0105 13:09:08.976663       8 log.go:172] (0xc0007eb8c0) (0xc0016e6140) Stream added, broadcasting: 1
I0105 13:09:08.982885       8 log.go:172] (0xc0007eb8c0) Reply frame received for 1
I0105 13:09:08.982947       8 log.go:172] (0xc0007eb8c0) (0xc0018c0000) Create stream
I0105 13:09:08.982961       8 log.go:172] (0xc0007eb8c0) (0xc0018c0000) Stream added, broadcasting: 3
I0105 13:09:08.986014       8 log.go:172] (0xc0007eb8c0) Reply frame received for 3
I0105 13:09:08.986069       8 log.go:172] (0xc0007eb8c0) (0xc00176a640) Create stream
I0105 13:09:08.986101       8 log.go:172] (0xc0007eb8c0) (0xc00176a640) Stream added, broadcasting: 5
I0105 13:09:08.993200       8 log.go:172] (0xc0007eb8c0) Reply frame received for 5
I0105 13:09:09.134247       8 log.go:172] (0xc0007eb8c0) Data frame received for 3
I0105 13:09:09.134391       8 log.go:172] (0xc0018c0000) (3) Data frame handling
I0105 13:09:09.134443       8 log.go:172] (0xc0018c0000) (3) Data frame sent
I0105 13:09:09.259145       8 log.go:172] (0xc0007eb8c0) Data frame received for 1
I0105 13:09:09.259272       8 log.go:172] (0xc0007eb8c0) (0xc0018c0000) Stream removed, broadcasting: 3
I0105 13:09:09.259329       8 log.go:172] (0xc0016e6140) (1) Data frame handling
I0105 13:09:09.259353       8 log.go:172] (0xc0007eb8c0) (0xc00176a640) Stream removed, broadcasting: 5
I0105 13:09:09.259373       8 log.go:172] (0xc0016e6140) (1) Data frame sent
I0105 13:09:09.259393       8 log.go:172] (0xc0007eb8c0) (0xc0016e6140) Stream removed, broadcasting: 1
I0105 13:09:09.259417       8 log.go:172] (0xc0007eb8c0) Go away received
I0105 13:09:09.259806       8 log.go:172] (0xc0007eb8c0) (0xc0016e6140) Stream removed, broadcasting: 1
I0105 13:09:09.259822       8 log.go:172] (0xc0007eb8c0) (0xc0018c0000) Stream removed, broadcasting: 3
I0105 13:09:09.259837       8 log.go:172] (0xc0007eb8c0) (0xc00176a640) Stream removed, broadcasting: 5
Jan  5 13:09:09.259: INFO: Exec stderr: ""
Jan  5 13:09:09.260: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:09.260: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:09.321389       8 log.go:172] (0xc00182e840) (0xc0018c03c0) Create stream
I0105 13:09:09.321525       8 log.go:172] (0xc00182e840) (0xc0018c03c0) Stream added, broadcasting: 1
I0105 13:09:09.327635       8 log.go:172] (0xc00182e840) Reply frame received for 1
I0105 13:09:09.327693       8 log.go:172] (0xc00182e840) (0xc0024e6780) Create stream
I0105 13:09:09.327710       8 log.go:172] (0xc00182e840) (0xc0024e6780) Stream added, broadcasting: 3
I0105 13:09:09.330109       8 log.go:172] (0xc00182e840) Reply frame received for 3
I0105 13:09:09.330286       8 log.go:172] (0xc00182e840) (0xc0018c0460) Create stream
I0105 13:09:09.330295       8 log.go:172] (0xc00182e840) (0xc0018c0460) Stream added, broadcasting: 5
I0105 13:09:09.331760       8 log.go:172] (0xc00182e840) Reply frame received for 5
I0105 13:09:09.413017       8 log.go:172] (0xc00182e840) Data frame received for 3
I0105 13:09:09.413050       8 log.go:172] (0xc0024e6780) (3) Data frame handling
I0105 13:09:09.413074       8 log.go:172] (0xc0024e6780) (3) Data frame sent
I0105 13:09:09.524897       8 log.go:172] (0xc00182e840) (0xc0018c0460) Stream removed, broadcasting: 5
I0105 13:09:09.525092       8 log.go:172] (0xc00182e840) Data frame received for 1
I0105 13:09:09.525129       8 log.go:172] (0xc00182e840) (0xc0024e6780) Stream removed, broadcasting: 3
I0105 13:09:09.525208       8 log.go:172] (0xc0018c03c0) (1) Data frame handling
I0105 13:09:09.525245       8 log.go:172] (0xc0018c03c0) (1) Data frame sent
I0105 13:09:09.525263       8 log.go:172] (0xc00182e840) (0xc0018c03c0) Stream removed, broadcasting: 1
I0105 13:09:09.525344       8 log.go:172] (0xc00182e840) Go away received
I0105 13:09:09.526241       8 log.go:172] (0xc00182e840) (0xc0018c03c0) Stream removed, broadcasting: 1
I0105 13:09:09.526265       8 log.go:172] (0xc00182e840) (0xc0024e6780) Stream removed, broadcasting: 3
I0105 13:09:09.526293       8 log.go:172] (0xc00182e840) (0xc0018c0460) Stream removed, broadcasting: 5
Jan  5 13:09:09.526: INFO: Exec stderr: ""
Jan  5 13:09:09.526: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:09.526: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:09.587711       8 log.go:172] (0xc0019a84d0) (0xc0016e65a0) Create stream
I0105 13:09:09.587753       8 log.go:172] (0xc0019a84d0) (0xc0016e65a0) Stream added, broadcasting: 1
I0105 13:09:09.595002       8 log.go:172] (0xc0019a84d0) Reply frame received for 1
I0105 13:09:09.595155       8 log.go:172] (0xc0019a84d0) (0xc00176a780) Create stream
I0105 13:09:09.595165       8 log.go:172] (0xc0019a84d0) (0xc00176a780) Stream added, broadcasting: 3
I0105 13:09:09.596900       8 log.go:172] (0xc0019a84d0) Reply frame received for 3
I0105 13:09:09.596924       8 log.go:172] (0xc0019a84d0) (0xc001573d60) Create stream
I0105 13:09:09.596933       8 log.go:172] (0xc0019a84d0) (0xc001573d60) Stream added, broadcasting: 5
I0105 13:09:09.598031       8 log.go:172] (0xc0019a84d0) Reply frame received for 5
I0105 13:09:09.678854       8 log.go:172] (0xc0019a84d0) Data frame received for 3
I0105 13:09:09.678984       8 log.go:172] (0xc00176a780) (3) Data frame handling
I0105 13:09:09.679024       8 log.go:172] (0xc00176a780) (3) Data frame sent
I0105 13:09:09.820101       8 log.go:172] (0xc0019a84d0) (0xc001573d60) Stream removed, broadcasting: 5
I0105 13:09:09.820467       8 log.go:172] (0xc0019a84d0) Data frame received for 1
I0105 13:09:09.820544       8 log.go:172] (0xc0019a84d0) (0xc00176a780) Stream removed, broadcasting: 3
I0105 13:09:09.820619       8 log.go:172] (0xc0016e65a0) (1) Data frame handling
I0105 13:09:09.820651       8 log.go:172] (0xc0016e65a0) (1) Data frame sent
I0105 13:09:09.820670       8 log.go:172] (0xc0019a84d0) (0xc0016e65a0) Stream removed, broadcasting: 1
I0105 13:09:09.820690       8 log.go:172] (0xc0019a84d0) Go away received
I0105 13:09:09.822050       8 log.go:172] (0xc0019a84d0) (0xc0016e65a0) Stream removed, broadcasting: 1
I0105 13:09:09.822072       8 log.go:172] (0xc0019a84d0) (0xc00176a780) Stream removed, broadcasting: 3
I0105 13:09:09.822085       8 log.go:172] (0xc0019a84d0) (0xc001573d60) Stream removed, broadcasting: 5
Jan  5 13:09:09.822: INFO: Exec stderr: ""
Jan  5 13:09:09.822: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:09.822: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:09.919214       8 log.go:172] (0xc00182f290) (0xc0018c0820) Create stream
I0105 13:09:09.919378       8 log.go:172] (0xc00182f290) (0xc0018c0820) Stream added, broadcasting: 1
I0105 13:09:09.928670       8 log.go:172] (0xc00182f290) Reply frame received for 1
I0105 13:09:09.928766       8 log.go:172] (0xc00182f290) (0xc0024e68c0) Create stream
I0105 13:09:09.928776       8 log.go:172] (0xc00182f290) (0xc0024e68c0) Stream added, broadcasting: 3
I0105 13:09:09.930119       8 log.go:172] (0xc00182f290) Reply frame received for 3
I0105 13:09:09.930154       8 log.go:172] (0xc00182f290) (0xc00115c000) Create stream
I0105 13:09:09.930275       8 log.go:172] (0xc00182f290) (0xc00115c000) Stream added, broadcasting: 5
I0105 13:09:09.931930       8 log.go:172] (0xc00182f290) Reply frame received for 5
I0105 13:09:10.042771       8 log.go:172] (0xc00182f290) Data frame received for 3
I0105 13:09:10.042992       8 log.go:172] (0xc0024e68c0) (3) Data frame handling
I0105 13:09:10.043068       8 log.go:172] (0xc0024e68c0) (3) Data frame sent
I0105 13:09:10.180449       8 log.go:172] (0xc00182f290) Data frame received for 1
I0105 13:09:10.180721       8 log.go:172] (0xc00182f290) (0xc0024e68c0) Stream removed, broadcasting: 3
I0105 13:09:10.180806       8 log.go:172] (0xc0018c0820) (1) Data frame handling
I0105 13:09:10.180852       8 log.go:172] (0xc0018c0820) (1) Data frame sent
I0105 13:09:10.180873       8 log.go:172] (0xc00182f290) (0xc00115c000) Stream removed, broadcasting: 5
I0105 13:09:10.180940       8 log.go:172] (0xc00182f290) (0xc0018c0820) Stream removed, broadcasting: 1
I0105 13:09:10.180979       8 log.go:172] (0xc00182f290) Go away received
I0105 13:09:10.181308       8 log.go:172] (0xc00182f290) (0xc0018c0820) Stream removed, broadcasting: 1
I0105 13:09:10.181320       8 log.go:172] (0xc00182f290) (0xc0024e68c0) Stream removed, broadcasting: 3
I0105 13:09:10.181325       8 log.go:172] (0xc00182f290) (0xc00115c000) Stream removed, broadcasting: 5
Jan  5 13:09:10.181: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  5 13:09:10.181: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:10.181: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:10.242347       8 log.go:172] (0xc0014c4dc0) (0xc0024e6be0) Create stream
I0105 13:09:10.242475       8 log.go:172] (0xc0014c4dc0) (0xc0024e6be0) Stream added, broadcasting: 1
I0105 13:09:10.248342       8 log.go:172] (0xc0014c4dc0) Reply frame received for 1
I0105 13:09:10.248371       8 log.go:172] (0xc0014c4dc0) (0xc0024e6d20) Create stream
I0105 13:09:10.248383       8 log.go:172] (0xc0014c4dc0) (0xc0024e6d20) Stream added, broadcasting: 3
I0105 13:09:10.251224       8 log.go:172] (0xc0014c4dc0) Reply frame received for 3
I0105 13:09:10.251308       8 log.go:172] (0xc0014c4dc0) (0xc0024e6dc0) Create stream
I0105 13:09:10.251318       8 log.go:172] (0xc0014c4dc0) (0xc0024e6dc0) Stream added, broadcasting: 5
I0105 13:09:10.252631       8 log.go:172] (0xc0014c4dc0) Reply frame received for 5
I0105 13:09:10.340106       8 log.go:172] (0xc0014c4dc0) Data frame received for 3
I0105 13:09:10.340226       8 log.go:172] (0xc0024e6d20) (3) Data frame handling
I0105 13:09:10.340265       8 log.go:172] (0xc0024e6d20) (3) Data frame sent
I0105 13:09:10.531047       8 log.go:172] (0xc0014c4dc0) (0xc0024e6d20) Stream removed, broadcasting: 3
I0105 13:09:10.531300       8 log.go:172] (0xc0014c4dc0) Data frame received for 1
I0105 13:09:10.531329       8 log.go:172] (0xc0024e6be0) (1) Data frame handling
I0105 13:09:10.531370       8 log.go:172] (0xc0024e6be0) (1) Data frame sent
I0105 13:09:10.531393       8 log.go:172] (0xc0014c4dc0) (0xc0024e6be0) Stream removed, broadcasting: 1
I0105 13:09:10.531434       8 log.go:172] (0xc0014c4dc0) (0xc0024e6dc0) Stream removed, broadcasting: 5
I0105 13:09:10.531495       8 log.go:172] (0xc0014c4dc0) Go away received
I0105 13:09:10.532078       8 log.go:172] (0xc0014c4dc0) (0xc0024e6be0) Stream removed, broadcasting: 1
I0105 13:09:10.532103       8 log.go:172] (0xc0014c4dc0) (0xc0024e6d20) Stream removed, broadcasting: 3
I0105 13:09:10.532123       8 log.go:172] (0xc0014c4dc0) (0xc0024e6dc0) Stream removed, broadcasting: 5
Jan  5 13:09:10.532: INFO: Exec stderr: ""
Jan  5 13:09:10.532: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:10.532: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:10.707519       8 log.go:172] (0xc0014c5ce0) (0xc0024e7180) Create stream
I0105 13:09:10.707814       8 log.go:172] (0xc0014c5ce0) (0xc0024e7180) Stream added, broadcasting: 1
I0105 13:09:10.778370       8 log.go:172] (0xc0014c5ce0) Reply frame received for 1
I0105 13:09:10.778633       8 log.go:172] (0xc0014c5ce0) (0xc0018c08c0) Create stream
I0105 13:09:10.778653       8 log.go:172] (0xc0014c5ce0) (0xc0018c08c0) Stream added, broadcasting: 3
I0105 13:09:10.794185       8 log.go:172] (0xc0014c5ce0) Reply frame received for 3
I0105 13:09:10.794365       8 log.go:172] (0xc0014c5ce0) (0xc0024e7220) Create stream
I0105 13:09:10.794382       8 log.go:172] (0xc0014c5ce0) (0xc0024e7220) Stream added, broadcasting: 5
I0105 13:09:10.803010       8 log.go:172] (0xc0014c5ce0) Reply frame received for 5
I0105 13:09:10.944085       8 log.go:172] (0xc0014c5ce0) Data frame received for 3
I0105 13:09:10.944182       8 log.go:172] (0xc0018c08c0) (3) Data frame handling
I0105 13:09:10.944210       8 log.go:172] (0xc0018c08c0) (3) Data frame sent
I0105 13:09:11.039332       8 log.go:172] (0xc0014c5ce0) (0xc0018c08c0) Stream removed, broadcasting: 3
I0105 13:09:11.039517       8 log.go:172] (0xc0014c5ce0) Data frame received for 1
I0105 13:09:11.039537       8 log.go:172] (0xc0024e7180) (1) Data frame handling
I0105 13:09:11.039550       8 log.go:172] (0xc0024e7180) (1) Data frame sent
I0105 13:09:11.039556       8 log.go:172] (0xc0014c5ce0) (0xc0024e7180) Stream removed, broadcasting: 1
I0105 13:09:11.039740       8 log.go:172] (0xc0014c5ce0) (0xc0024e7220) Stream removed, broadcasting: 5
I0105 13:09:11.039777       8 log.go:172] (0xc0014c5ce0) (0xc0024e7180) Stream removed, broadcasting: 1
I0105 13:09:11.039784       8 log.go:172] (0xc0014c5ce0) (0xc0018c08c0) Stream removed, broadcasting: 3
I0105 13:09:11.039791       8 log.go:172] (0xc0014c5ce0) (0xc0024e7220) Stream removed, broadcasting: 5
Jan  5 13:09:11.040: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  5 13:09:11.040: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:11.040: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:11.041052       8 log.go:172] (0xc0014c5ce0) Go away received
I0105 13:09:11.103242       8 log.go:172] (0xc001e5e000) (0xc00176abe0) Create stream
I0105 13:09:11.103292       8 log.go:172] (0xc001e5e000) (0xc00176abe0) Stream added, broadcasting: 1
I0105 13:09:11.107295       8 log.go:172] (0xc001e5e000) Reply frame received for 1
I0105 13:09:11.107330       8 log.go:172] (0xc001e5e000) (0xc0018c0960) Create stream
I0105 13:09:11.107337       8 log.go:172] (0xc001e5e000) (0xc0018c0960) Stream added, broadcasting: 3
I0105 13:09:11.108768       8 log.go:172] (0xc001e5e000) Reply frame received for 3
I0105 13:09:11.108798       8 log.go:172] (0xc001e5e000) (0xc00115c140) Create stream
I0105 13:09:11.108817       8 log.go:172] (0xc001e5e000) (0xc00115c140) Stream added, broadcasting: 5
I0105 13:09:11.110939       8 log.go:172] (0xc001e5e000) Reply frame received for 5
I0105 13:09:11.182615       8 log.go:172] (0xc001e5e000) Data frame received for 3
I0105 13:09:11.182638       8 log.go:172] (0xc0018c0960) (3) Data frame handling
I0105 13:09:11.182658       8 log.go:172] (0xc0018c0960) (3) Data frame sent
I0105 13:09:11.287848       8 log.go:172] (0xc001e5e000) Data frame received for 1
I0105 13:09:11.287985       8 log.go:172] (0xc001e5e000) (0xc0018c0960) Stream removed, broadcasting: 3
I0105 13:09:11.288030       8 log.go:172] (0xc00176abe0) (1) Data frame handling
I0105 13:09:11.288050       8 log.go:172] (0xc00176abe0) (1) Data frame sent
I0105 13:09:11.288073       8 log.go:172] (0xc001e5e000) (0xc00115c140) Stream removed, broadcasting: 5
I0105 13:09:11.288104       8 log.go:172] (0xc001e5e000) (0xc00176abe0) Stream removed, broadcasting: 1
I0105 13:09:11.288129       8 log.go:172] (0xc001e5e000) Go away received
I0105 13:09:11.288705       8 log.go:172] (0xc001e5e000) (0xc00176abe0) Stream removed, broadcasting: 1
I0105 13:09:11.288736       8 log.go:172] (0xc001e5e000) (0xc0018c0960) Stream removed, broadcasting: 3
I0105 13:09:11.288749       8 log.go:172] (0xc001e5e000) (0xc00115c140) Stream removed, broadcasting: 5
Jan  5 13:09:11.288: INFO: Exec stderr: ""
Jan  5 13:09:11.288: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:11.288: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:11.337031       8 log.go:172] (0xc0019a9810) (0xc0016e6a00) Create stream
I0105 13:09:11.337128       8 log.go:172] (0xc0019a9810) (0xc0016e6a00) Stream added, broadcasting: 1
I0105 13:09:11.343471       8 log.go:172] (0xc0019a9810) Reply frame received for 1
I0105 13:09:11.343527       8 log.go:172] (0xc0019a9810) (0xc00115c460) Create stream
I0105 13:09:11.343544       8 log.go:172] (0xc0019a9810) (0xc00115c460) Stream added, broadcasting: 3
I0105 13:09:11.346062       8 log.go:172] (0xc0019a9810) Reply frame received for 3
I0105 13:09:11.346168       8 log.go:172] (0xc0019a9810) (0xc0018c0a00) Create stream
I0105 13:09:11.346177       8 log.go:172] (0xc0019a9810) (0xc0018c0a00) Stream added, broadcasting: 5
I0105 13:09:11.347294       8 log.go:172] (0xc0019a9810) Reply frame received for 5
I0105 13:09:11.430201       8 log.go:172] (0xc0019a9810) Data frame received for 3
I0105 13:09:11.430306       8 log.go:172] (0xc00115c460) (3) Data frame handling
I0105 13:09:11.430361       8 log.go:172] (0xc00115c460) (3) Data frame sent
I0105 13:09:11.568979       8 log.go:172] (0xc0019a9810) Data frame received for 1
I0105 13:09:11.569157       8 log.go:172] (0xc0019a9810) (0xc00115c460) Stream removed, broadcasting: 3
I0105 13:09:11.569218       8 log.go:172] (0xc0016e6a00) (1) Data frame handling
I0105 13:09:11.569255       8 log.go:172] (0xc0016e6a00) (1) Data frame sent
I0105 13:09:11.569290       8 log.go:172] (0xc0019a9810) (0xc0018c0a00) Stream removed, broadcasting: 5
I0105 13:09:11.569331       8 log.go:172] (0xc0019a9810) (0xc0016e6a00) Stream removed, broadcasting: 1
I0105 13:09:11.569345       8 log.go:172] (0xc0019a9810) Go away received
I0105 13:09:11.570225       8 log.go:172] (0xc0019a9810) (0xc0016e6a00) Stream removed, broadcasting: 1
I0105 13:09:11.570274       8 log.go:172] (0xc0019a9810) (0xc00115c460) Stream removed, broadcasting: 3
I0105 13:09:11.570292       8 log.go:172] (0xc0019a9810) (0xc0018c0a00) Stream removed, broadcasting: 5
Jan  5 13:09:11.570: INFO: Exec stderr: ""
Jan  5 13:09:11.570: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:11.570: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:11.674431       8 log.go:172] (0xc001fb2d10) (0xc0018c0e60) Create stream
I0105 13:09:11.674674       8 log.go:172] (0xc001fb2d10) (0xc0018c0e60) Stream added, broadcasting: 1
I0105 13:09:11.680731       8 log.go:172] (0xc001fb2d10) Reply frame received for 1
I0105 13:09:11.680790       8 log.go:172] (0xc001fb2d10) (0xc0018c0f00) Create stream
I0105 13:09:11.680804       8 log.go:172] (0xc001fb2d10) (0xc0018c0f00) Stream added, broadcasting: 3
I0105 13:09:11.683011       8 log.go:172] (0xc001fb2d10) Reply frame received for 3
I0105 13:09:11.683049       8 log.go:172] (0xc001fb2d10) (0xc0016e6aa0) Create stream
I0105 13:09:11.683063       8 log.go:172] (0xc001fb2d10) (0xc0016e6aa0) Stream added, broadcasting: 5
I0105 13:09:11.685420       8 log.go:172] (0xc001fb2d10) Reply frame received for 5
I0105 13:09:11.818960       8 log.go:172] (0xc001fb2d10) Data frame received for 3
I0105 13:09:11.819163       8 log.go:172] (0xc0018c0f00) (3) Data frame handling
I0105 13:09:11.819211       8 log.go:172] (0xc0018c0f00) (3) Data frame sent
I0105 13:09:12.015731       8 log.go:172] (0xc001fb2d10) (0xc0018c0f00) Stream removed, broadcasting: 3
I0105 13:09:12.015913       8 log.go:172] (0xc001fb2d10) Data frame received for 1
I0105 13:09:12.015931       8 log.go:172] (0xc0018c0e60) (1) Data frame handling
I0105 13:09:12.015943       8 log.go:172] (0xc0018c0e60) (1) Data frame sent
I0105 13:09:12.016011       8 log.go:172] (0xc001fb2d10) (0xc0018c0e60) Stream removed, broadcasting: 1
I0105 13:09:12.016308       8 log.go:172] (0xc001fb2d10) (0xc0016e6aa0) Stream removed, broadcasting: 5
I0105 13:09:12.016371       8 log.go:172] (0xc001fb2d10) Go away received
I0105 13:09:12.016469       8 log.go:172] (0xc001fb2d10) (0xc0018c0e60) Stream removed, broadcasting: 1
I0105 13:09:12.016511       8 log.go:172] (0xc001fb2d10) (0xc0018c0f00) Stream removed, broadcasting: 3
I0105 13:09:12.016564       8 log.go:172] (0xc001fb2d10) (0xc0016e6aa0) Stream removed, broadcasting: 5
Jan  5 13:09:12.016: INFO: Exec stderr: ""
Jan  5 13:09:12.016: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4250 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:09:12.017: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:09:12.132750       8 log.go:172] (0xc001d0d130) (0xc0024e75e0) Create stream
I0105 13:09:12.133031       8 log.go:172] (0xc001d0d130) (0xc0024e75e0) Stream added, broadcasting: 1
I0105 13:09:12.142806       8 log.go:172] (0xc001d0d130) Reply frame received for 1
I0105 13:09:12.143025       8 log.go:172] (0xc001d0d130) (0xc0016e6b40) Create stream
I0105 13:09:12.143058       8 log.go:172] (0xc001d0d130) (0xc0016e6b40) Stream added, broadcasting: 3
I0105 13:09:12.147087       8 log.go:172] (0xc001d0d130) Reply frame received for 3
I0105 13:09:12.147195       8 log.go:172] (0xc001d0d130) (0xc0018c1180) Create stream
I0105 13:09:12.147213       8 log.go:172] (0xc001d0d130) (0xc0018c1180) Stream added, broadcasting: 5
I0105 13:09:12.148922       8 log.go:172] (0xc001d0d130) Reply frame received for 5
I0105 13:09:12.363492       8 log.go:172] (0xc001d0d130) Data frame received for 3
I0105 13:09:12.363638       8 log.go:172] (0xc0016e6b40) (3) Data frame handling
I0105 13:09:12.363668       8 log.go:172] (0xc0016e6b40) (3) Data frame sent
I0105 13:09:12.587352       8 log.go:172] (0xc001d0d130) (0xc0016e6b40) Stream removed, broadcasting: 3
I0105 13:09:12.587624       8 log.go:172] (0xc001d0d130) Data frame received for 1
I0105 13:09:12.587637       8 log.go:172] (0xc0024e75e0) (1) Data frame handling
I0105 13:09:12.587657       8 log.go:172] (0xc0024e75e0) (1) Data frame sent
I0105 13:09:12.587708       8 log.go:172] (0xc001d0d130) (0xc0018c1180) Stream removed, broadcasting: 5
I0105 13:09:12.587807       8 log.go:172] (0xc001d0d130) (0xc0024e75e0) Stream removed, broadcasting: 1
I0105 13:09:12.587838       8 log.go:172] (0xc001d0d130) Go away received
I0105 13:09:12.588359       8 log.go:172] (0xc001d0d130) (0xc0024e75e0) Stream removed, broadcasting: 1
I0105 13:09:12.588372       8 log.go:172] (0xc001d0d130) (0xc0016e6b40) Stream removed, broadcasting: 3
I0105 13:09:12.588378       8 log.go:172] (0xc001d0d130) (0xc0018c1180) Stream removed, broadcasting: 5
Jan  5 13:09:12.588: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:09:12.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4250" for this suite.
Jan  5 13:09:56.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:09:56.773: INFO: namespace e2e-kubelet-etc-hosts-4250 deletion completed in 44.165638315s

• [SLOW TEST:70.143 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
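
Editor's note: the three cases verified above can be reproduced by hand. A minimal sketch with hypothetical names: with hostNetwork: false the kubelet injects its managed hosts file, with hostNetwork: true the node's own /etc/hosts shows through, and a container that mounts a volume at /etc/hosts keeps whatever that volume holds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  hostNetwork: false         # flip to true for the host-network case
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: node-etc-hosts
      mountPath: /etc/hosts  # explicit mount => kubelet leaves it alone
  volumes:
  - name: node-etc-hosts
    hostPath:
      path: /etc/hosts
      type: File
EOF

# A kubelet-managed copy begins with a "# Kubernetes-managed hosts file."
# marker comment; the explicitly mounted copy in busybox-3 does not.
kubectl exec etc-hosts-demo -c busybox-1 -- cat /etc/hosts
kubectl exec etc-hosts-demo -c busybox-3 -- cat /etc/hosts
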
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:09:56.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:09:56.896: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  5 13:09:56.928: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  5 13:10:01.947: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  5 13:10:05.966: INFO: Creating deployment "test-rolling-update-deployment"
Jan  5 13:10:05.980: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  5 13:10:06.020: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  5 13:10:08.035: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  5 13:10:08.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:10:10.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:10:12.047: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:10:14.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:10:16.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826606, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:10:18.049: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  5 13:10:18.061: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3121,SelfLink:/apis/apps/v1/namespaces/deployment-3121/deployments/test-rolling-update-deployment,UID:fd8894b1-16f0-418c-b9d6-a86dbce8c441,ResourceVersion:19396594,Generation:1,CreationTimestamp:2020-01-05 13:10:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-05 13:10:06 +0000 UTC 2020-01-05 13:10:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-05 13:10:16 +0000 UTC 2020-01-05 13:10:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  5 13:10:18.064: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3121,SelfLink:/apis/apps/v1/namespaces/deployment-3121/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:b7bf8e86-912d-442f-808c-134c590dbce4,ResourceVersion:19396584,Generation:1,CreationTimestamp:2020-01-05 13:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fd8894b1-16f0-418c-b9d6-a86dbce8c441 0xc002a7e877 0xc002a7e878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  5 13:10:18.065: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  5 13:10:18.065: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3121,SelfLink:/apis/apps/v1/namespaces/deployment-3121/replicasets/test-rolling-update-controller,UID:8b1708b9-c882-4a3e-ac5e-ba045711c598,ResourceVersion:19396593,Generation:2,CreationTimestamp:2020-01-05 13:09:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fd8894b1-16f0-418c-b9d6-a86dbce8c441 0xc002a7e797 0xc002a7e798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:10:18.068: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-dv55f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-dv55f,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3121,SelfLink:/api/v1/namespaces/deployment-3121/pods/test-rolling-update-deployment-79f6b9d75c-dv55f,UID:4e691b7e-f0aa-4780-9a70-f34de5791b24,ResourceVersion:19396583,Generation:0,CreationTimestamp:2020-01-05 13:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c b7bf8e86-912d-442f-808c-134c590dbce4 0xc0022b0cf7 0xc0022b0cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5mk5k {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5mk5k,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-5mk5k true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b0d70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b0d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:10:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:10:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:10:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:10:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-05 13:10:06 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-05 13:10:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://93319d7e2a6d4c3d97e05ffa94936f0892b494e8337e8419bb48b1704777cb19}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:10:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3121" for this suite.
Jan  5 13:10:24.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:10:24.220: INFO: namespace deployment-3121 deletion completed in 6.147669276s

• [SLOW TEST:27.445 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
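
Editor's note: a minimal sketch of the adopt-then-roll pattern this test drives: a bare ReplicaSet whose pods match a Deployment's selector gets adopted as the "old" replica set, and a template change then replaces its pods under the RollingUpdate strategy. Names and images below are hypothetical; the 25% bounds mirror the defaults in the dump above:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-example
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF

# A template change triggers the rolling update: old pods are deleted and
# new ones created stepwise within the surge/unavailable bounds.
kubectl set image deployment/test-rolling-update-example nginx=nginx:1.15-alpine
kubectl rollout status deployment/test-rolling-update-example
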
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:10:24.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5638
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  5 13:10:24.495: INFO: Found 0 stateful pods, waiting for 3
Jan  5 13:10:34.517: INFO: Found 2 stateful pods, waiting for 3
Jan  5 13:10:44.517: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:10:44.517: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:10:44.517: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 13:10:54.515: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:10:54.516: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:10:54.516: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:10:54.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:10:57.229: INFO: stderr: "I0105 13:10:56.831317      74 log.go:172] (0xc000138dc0) (0xc0005dc780) Create stream\nI0105 13:10:56.831520      74 log.go:172] (0xc000138dc0) (0xc0005dc780) Stream added, broadcasting: 1\nI0105 13:10:56.838520      74 log.go:172] (0xc000138dc0) Reply frame received for 1\nI0105 13:10:56.838632      74 log.go:172] (0xc000138dc0) (0xc0005dc820) Create stream\nI0105 13:10:56.838652      74 log.go:172] (0xc000138dc0) (0xc0005dc820) Stream added, broadcasting: 3\nI0105 13:10:56.840950      74 log.go:172] (0xc000138dc0) Reply frame received for 3\nI0105 13:10:56.840991      74 log.go:172] (0xc000138dc0) (0xc0006ca0a0) Create stream\nI0105 13:10:56.841010      74 log.go:172] (0xc000138dc0) (0xc0006ca0a0) Stream added, broadcasting: 5\nI0105 13:10:56.843159      74 log.go:172] (0xc000138dc0) Reply frame received for 5\nI0105 13:10:57.034391      74 log.go:172] (0xc000138dc0) Data frame received for 5\nI0105 13:10:57.034440      74 log.go:172] (0xc0006ca0a0) (5) Data frame handling\nI0105 13:10:57.034470      74 log.go:172] (0xc0006ca0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:10:57.093962      74 log.go:172] (0xc000138dc0) Data frame received for 3\nI0105 13:10:57.094025      74 log.go:172] (0xc0005dc820) (3) Data frame handling\nI0105 13:10:57.094062      74 log.go:172] (0xc0005dc820) (3) Data frame sent\nI0105 13:10:57.210157      74 log.go:172] (0xc000138dc0) Data frame received for 1\nI0105 13:10:57.210433      74 log.go:172] (0xc0005dc780) (1) Data frame handling\nI0105 13:10:57.210530      74 log.go:172] (0xc0005dc780) (1) Data frame sent\nI0105 13:10:57.213637      74 log.go:172] (0xc000138dc0) (0xc0005dc780) Stream removed, broadcasting: 1\nI0105 13:10:57.214054      74 log.go:172] (0xc000138dc0) (0xc0005dc820) Stream removed, broadcasting: 3\nI0105 13:10:57.214171      74 log.go:172] (0xc000138dc0) (0xc0006ca0a0) Stream removed, broadcasting: 5\nI0105 13:10:57.214283      74 log.go:172] (0xc000138dc0) Go away received\nI0105 13:10:57.215799      74 log.go:172] (0xc000138dc0) (0xc0005dc780) Stream removed, broadcasting: 1\nI0105 13:10:57.215819      74 log.go:172] (0xc000138dc0) (0xc0005dc820) Stream removed, broadcasting: 3\nI0105 13:10:57.215830      74 log.go:172] (0xc000138dc0) (0xc0006ca0a0) Stream removed, broadcasting: 5\n"
Jan  5 13:10:57.229: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:10:57.229: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
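
The exec above deliberately moves index.html out of nginx's web root; the pod's readiness check serves that file, so the move flips ss2-1 to Ready=false and lets the test observe the update gating on pod readiness. A minimal sketch of the same step by hand, using the namespace and pod name from this run:

kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss2-1 -- \
  /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# the trailing '|| true' keeps the exit status zero even if the file was already moved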

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  5 13:11:07.289: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  5 13:11:17.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:11:17.871: INFO: stderr: "I0105 13:11:17.635506     106 log.go:172] (0xc000124d10) (0xc0007bc5a0) Create stream\nI0105 13:11:17.635831     106 log.go:172] (0xc000124d10) (0xc0007bc5a0) Stream added, broadcasting: 1\nI0105 13:11:17.639519     106 log.go:172] (0xc000124d10) Reply frame received for 1\nI0105 13:11:17.639600     106 log.go:172] (0xc000124d10) (0xc0005e61e0) Create stream\nI0105 13:11:17.639608     106 log.go:172] (0xc000124d10) (0xc0005e61e0) Stream added, broadcasting: 3\nI0105 13:11:17.647774     106 log.go:172] (0xc000124d10) Reply frame received for 3\nI0105 13:11:17.647954     106 log.go:172] (0xc000124d10) (0xc0003ca000) Create stream\nI0105 13:11:17.647979     106 log.go:172] (0xc000124d10) (0xc0003ca000) Stream added, broadcasting: 5\nI0105 13:11:17.649413     106 log.go:172] (0xc000124d10) Reply frame received for 5\nI0105 13:11:17.730237     106 log.go:172] (0xc000124d10) Data frame received for 5\nI0105 13:11:17.730458     106 log.go:172] (0xc0003ca000) (5) Data frame handling\nI0105 13:11:17.730506     106 log.go:172] (0xc0003ca000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 13:11:17.730724     106 log.go:172] (0xc000124d10) Data frame received for 3\nI0105 13:11:17.730780     106 log.go:172] (0xc0005e61e0) (3) Data frame handling\nI0105 13:11:17.730820     106 log.go:172] (0xc0005e61e0) (3) Data frame sent\nI0105 13:11:17.855804     106 log.go:172] (0xc000124d10) Data frame received for 1\nI0105 13:11:17.856024     106 log.go:172] (0xc000124d10) (0xc0003ca000) Stream removed, broadcasting: 5\nI0105 13:11:17.856232     106 log.go:172] (0xc000124d10) (0xc0005e61e0) Stream removed, broadcasting: 3\nI0105 13:11:17.856468     106 log.go:172] (0xc0007bc5a0) (1) Data frame handling\nI0105 13:11:17.856536     106 log.go:172] (0xc0007bc5a0) (1) Data frame sent\nI0105 13:11:17.856617     106 log.go:172] (0xc000124d10) (0xc0007bc5a0) Stream removed, broadcasting: 1\nI0105 13:11:17.856691     106 log.go:172] (0xc000124d10) Go away received\nI0105 13:11:17.858395     106 log.go:172] (0xc000124d10) (0xc0007bc5a0) Stream removed, broadcasting: 1\nI0105 13:11:17.858413     106 log.go:172] (0xc000124d10) (0xc0005e61e0) Stream removed, broadcasting: 3\nI0105 13:11:17.858417     106 log.go:172] (0xc000124d10) (0xc0003ca000) Stream removed, broadcasting: 5\n"
Jan  5 13:11:17.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 13:11:17.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 13:11:27.915: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:11:27.915: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 13:11:27.915: INFO: Waiting for Pod statefulset-5638/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 13:11:37.939: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:11:37.940: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 13:11:47.932: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:11:47.932: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 13:11:57.935: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  5 13:12:07.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:12:08.447: INFO: stderr: "I0105 13:12:08.126287     124 log.go:172] (0xc000a0e370) (0xc00089e640) Create stream\nI0105 13:12:08.126737     124 log.go:172] (0xc000a0e370) (0xc00089e640) Stream added, broadcasting: 1\nI0105 13:12:08.132464     124 log.go:172] (0xc000a0e370) Reply frame received for 1\nI0105 13:12:08.132546     124 log.go:172] (0xc000a0e370) (0xc0009b2000) Create stream\nI0105 13:12:08.132567     124 log.go:172] (0xc000a0e370) (0xc0009b2000) Stream added, broadcasting: 3\nI0105 13:12:08.134228     124 log.go:172] (0xc000a0e370) Reply frame received for 3\nI0105 13:12:08.134301     124 log.go:172] (0xc000a0e370) (0xc00089e6e0) Create stream\nI0105 13:12:08.134318     124 log.go:172] (0xc000a0e370) (0xc00089e6e0) Stream added, broadcasting: 5\nI0105 13:12:08.135979     124 log.go:172] (0xc000a0e370) Reply frame received for 5\nI0105 13:12:08.275072     124 log.go:172] (0xc000a0e370) Data frame received for 5\nI0105 13:12:08.275148     124 log.go:172] (0xc00089e6e0) (5) Data frame handling\nI0105 13:12:08.275175     124 log.go:172] (0xc00089e6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:12:08.314215     124 log.go:172] (0xc000a0e370) Data frame received for 3\nI0105 13:12:08.314277     124 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0105 13:12:08.314298     124 log.go:172] (0xc0009b2000) (3) Data frame sent\nI0105 13:12:08.434759     124 log.go:172] (0xc000a0e370) Data frame received for 1\nI0105 13:12:08.434935     124 log.go:172] (0xc000a0e370) (0xc0009b2000) Stream removed, broadcasting: 3\nI0105 13:12:08.435028     124 log.go:172] (0xc00089e640) (1) Data frame handling\nI0105 13:12:08.435067     124 log.go:172] (0xc00089e640) (1) Data frame sent\nI0105 13:12:08.435146     124 log.go:172] (0xc000a0e370) (0xc00089e6e0) Stream removed, broadcasting: 5\nI0105 13:12:08.435180     124 log.go:172] (0xc000a0e370) (0xc00089e640) Stream removed, broadcasting: 1\nI0105 13:12:08.435215     124 log.go:172] (0xc000a0e370) Go away received\nI0105 13:12:08.436198     124 log.go:172] (0xc000a0e370) (0xc00089e640) Stream removed, broadcasting: 1\nI0105 13:12:08.436213     124 log.go:172] (0xc000a0e370) (0xc0009b2000) Stream removed, broadcasting: 3\nI0105 13:12:08.436219     124 log.go:172] (0xc000a0e370) (0xc00089e6e0) Stream removed, broadcasting: 5\n"
Jan  5 13:12:08.447: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:12:08.447: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 13:12:18.520: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  5 13:12:28.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5638 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:12:29.160: INFO: stderr: "I0105 13:12:28.944805     144 log.go:172] (0xc0009140b0) (0xc0007ae640) Create stream\nI0105 13:12:28.945512     144 log.go:172] (0xc0009140b0) (0xc0007ae640) Stream added, broadcasting: 1\nI0105 13:12:28.953127     144 log.go:172] (0xc0009140b0) Reply frame received for 1\nI0105 13:12:28.953347     144 log.go:172] (0xc0009140b0) (0xc000576280) Create stream\nI0105 13:12:28.953384     144 log.go:172] (0xc0009140b0) (0xc000576280) Stream added, broadcasting: 3\nI0105 13:12:28.956997     144 log.go:172] (0xc0009140b0) Reply frame received for 3\nI0105 13:12:28.957109     144 log.go:172] (0xc0009140b0) (0xc000576320) Create stream\nI0105 13:12:28.957129     144 log.go:172] (0xc0009140b0) (0xc000576320) Stream added, broadcasting: 5\nI0105 13:12:28.961162     144 log.go:172] (0xc0009140b0) Reply frame received for 5\nI0105 13:12:29.037544     144 log.go:172] (0xc0009140b0) Data frame received for 5\nI0105 13:12:29.037907     144 log.go:172] (0xc000576320) (5) Data frame handling\nI0105 13:12:29.037985     144 log.go:172] (0xc000576320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 13:12:29.038085     144 log.go:172] (0xc0009140b0) Data frame received for 3\nI0105 13:12:29.038148     144 log.go:172] (0xc000576280) (3) Data frame handling\nI0105 13:12:29.038161     144 log.go:172] (0xc000576280) (3) Data frame sent\nI0105 13:12:29.152082     144 log.go:172] (0xc0009140b0) Data frame received for 1\nI0105 13:12:29.152172     144 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0105 13:12:29.152195     144 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0105 13:12:29.152215     144 log.go:172] (0xc0009140b0) (0xc0007ae640) Stream removed, broadcasting: 1\nI0105 13:12:29.152343     144 log.go:172] (0xc0009140b0) (0xc000576280) Stream removed, broadcasting: 3\nI0105 13:12:29.152464     144 log.go:172] (0xc0009140b0) (0xc000576320) Stream removed, broadcasting: 5\nI0105 13:12:29.153070     144 log.go:172] (0xc0009140b0) Go away received\nI0105 13:12:29.153467     144 log.go:172] (0xc0009140b0) (0xc0007ae640) Stream removed, broadcasting: 1\nI0105 13:12:29.153480     144 log.go:172] (0xc0009140b0) (0xc000576280) Stream removed, broadcasting: 3\nI0105 13:12:29.153490     144 log.go:172] (0xc0009140b0) (0xc000576320) Stream removed, broadcasting: 5\n"
Jan  5 13:12:29.160: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 13:12:29.160: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 13:12:29.339: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:12:29.340: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:29.340: INFO: Waiting for Pod statefulset-5638/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:29.340: INFO: Waiting for Pod statefulset-5638/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:39.360: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:12:39.360: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:39.360: INFO: Waiting for Pod statefulset-5638/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:49.356: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:12:49.356: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:49.356: INFO: Waiting for Pod statefulset-5638/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:12:59.549: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:12:59.549: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:13:09.366: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
Jan  5 13:13:09.366: INFO: Waiting for Pod statefulset-5638/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  5 13:13:19.401: INFO: Waiting for StatefulSet statefulset-5638/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  5 13:13:29.369: INFO: Deleting all statefulset in ns statefulset-5638
Jan  5 13:13:29.373: INFO: Scaling statefulset ss2 to 0
Jan  5 13:13:59.428: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 13:13:59.433: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:13:59.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5638" for this suite.
Jan  5 13:14:07.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:14:07.666: INFO: namespace statefulset-5638 deletion completed in 8.187310835s

• [SLOW TEST:223.445 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
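
The same update-and-rollback cycle can be driven by hand with kubectl. A hedged sketch, assuming the StatefulSet's container is named nginx (the container name never appears in the log above):

kubectl -n statefulset-5638 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-5638 rollout status statefulset/ss2
kubectl -n statefulset-5638 rollout undo statefulset/ss2
kubectl -n statefulset-5638 get controllerrevisions   # revisions such as ss2-6c5cd755cd / ss2-7c9b54fd4c

With the default RollingUpdate strategy the controller replaces pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), which is the ordering the "reverse ordinal order" steps above wait on.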
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:14:07.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1678
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  5 13:14:08.533: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  5 13:14:46.849: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1678 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:14:46.849: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:14:46.967673       8 log.go:172] (0xc0014c4000) (0xc0003a4640) Create stream
I0105 13:14:46.967792       8 log.go:172] (0xc0014c4000) (0xc0003a4640) Stream added, broadcasting: 1
I0105 13:14:46.973778       8 log.go:172] (0xc0014c4000) Reply frame received for 1
I0105 13:14:46.973814       8 log.go:172] (0xc0014c4000) (0xc00034e280) Create stream
I0105 13:14:46.973825       8 log.go:172] (0xc0014c4000) (0xc00034e280) Stream added, broadcasting: 3
I0105 13:14:46.975283       8 log.go:172] (0xc0014c4000) Reply frame received for 3
I0105 13:14:46.975315       8 log.go:172] (0xc0014c4000) (0xc000a5a000) Create stream
I0105 13:14:46.975327       8 log.go:172] (0xc0014c4000) (0xc000a5a000) Stream added, broadcasting: 5
I0105 13:14:46.976793       8 log.go:172] (0xc0014c4000) Reply frame received for 5
I0105 13:14:48.171729       8 log.go:172] (0xc0014c4000) Data frame received for 3
I0105 13:14:48.171891       8 log.go:172] (0xc00034e280) (3) Data frame handling
I0105 13:14:48.171938       8 log.go:172] (0xc00034e280) (3) Data frame sent
I0105 13:14:48.354329       8 log.go:172] (0xc0014c4000) Data frame received for 1
I0105 13:14:48.354738       8 log.go:172] (0xc0014c4000) (0xc000a5a000) Stream removed, broadcasting: 5
I0105 13:14:48.354848       8 log.go:172] (0xc0003a4640) (1) Data frame handling
I0105 13:14:48.354889       8 log.go:172] (0xc0003a4640) (1) Data frame sent
I0105 13:14:48.354975       8 log.go:172] (0xc0014c4000) (0xc00034e280) Stream removed, broadcasting: 3
I0105 13:14:48.355062       8 log.go:172] (0xc0014c4000) (0xc0003a4640) Stream removed, broadcasting: 1
I0105 13:14:48.355086       8 log.go:172] (0xc0014c4000) Go away received
I0105 13:14:48.356243       8 log.go:172] (0xc0014c4000) (0xc0003a4640) Stream removed, broadcasting: 1
I0105 13:14:48.356277       8 log.go:172] (0xc0014c4000) (0xc00034e280) Stream removed, broadcasting: 3
I0105 13:14:48.356298       8 log.go:172] (0xc0014c4000) (0xc000a5a000) Stream removed, broadcasting: 5
Jan  5 13:14:48.356: INFO: Found all expected endpoints: [netserver-0]
Jan  5 13:14:48.370: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1678 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:14:48.371: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:14:48.449639       8 log.go:172] (0xc001d0cd10) (0xc00034edc0) Create stream
I0105 13:14:48.449791       8 log.go:172] (0xc001d0cd10) (0xc00034edc0) Stream added, broadcasting: 1
I0105 13:14:48.460036       8 log.go:172] (0xc001d0cd10) Reply frame received for 1
I0105 13:14:48.460098       8 log.go:172] (0xc001d0cd10) (0xc000fa2500) Create stream
I0105 13:14:48.460116       8 log.go:172] (0xc001d0cd10) (0xc000fa2500) Stream added, broadcasting: 3
I0105 13:14:48.462875       8 log.go:172] (0xc001d0cd10) Reply frame received for 3
I0105 13:14:48.462904       8 log.go:172] (0xc001d0cd10) (0xc000f160a0) Create stream
I0105 13:14:48.462913       8 log.go:172] (0xc001d0cd10) (0xc000f160a0) Stream added, broadcasting: 5
I0105 13:14:48.465422       8 log.go:172] (0xc001d0cd10) Reply frame received for 5
I0105 13:14:49.730199       8 log.go:172] (0xc001d0cd10) Data frame received for 3
I0105 13:14:49.730403       8 log.go:172] (0xc000fa2500) (3) Data frame handling
I0105 13:14:49.730458       8 log.go:172] (0xc000fa2500) (3) Data frame sent
I0105 13:14:49.907163       8 log.go:172] (0xc001d0cd10) Data frame received for 1
I0105 13:14:49.907457       8 log.go:172] (0xc001d0cd10) (0xc000fa2500) Stream removed, broadcasting: 3
I0105 13:14:49.907561       8 log.go:172] (0xc00034edc0) (1) Data frame handling
I0105 13:14:49.907581       8 log.go:172] (0xc00034edc0) (1) Data frame sent
I0105 13:14:49.907589       8 log.go:172] (0xc001d0cd10) (0xc00034edc0) Stream removed, broadcasting: 1
I0105 13:14:49.907972       8 log.go:172] (0xc001d0cd10) (0xc000f160a0) Stream removed, broadcasting: 5
I0105 13:14:49.908011       8 log.go:172] (0xc001d0cd10) (0xc00034edc0) Stream removed, broadcasting: 1
I0105 13:14:49.908020       8 log.go:172] (0xc001d0cd10) (0xc000fa2500) Stream removed, broadcasting: 3
I0105 13:14:49.908026       8 log.go:172] (0xc001d0cd10) (0xc000f160a0) Stream removed, broadcasting: 5
Jan  5 13:14:49.908: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:14:49.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0105 13:14:49.909539       8 log.go:172] (0xc001d0cd10) Go away received
STEP: Destroying namespace "pod-network-test-1678" for this suite.
Jan  5 13:15:13.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:15:14.097: INFO: namespace pod-network-test-1678 deletion completed in 24.175567808s

• [SLOW TEST:66.431 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
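
Each granular check pipes the literal string hostName over UDP to a netserver pod and expects that pod's hostname back. The probe the framework ran, reproduced as a standalone command with the pod IP and port from this run:

kubectl -n pod-network-test-1678 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"
# expected stdout: netserver-0 (the endpoint reported found above)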
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:15:14.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:15:14.300: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  5 13:15:19.314: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  5 13:15:23.330: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  5 13:15:25.339: INFO: Creating deployment "test-rollover-deployment"
Jan  5 13:15:25.371: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  5 13:15:27.384: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  5 13:15:27.400: INFO: Ensure that both replica sets have 1 created replica
Jan  5 13:15:27.420: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  5 13:15:27.432: INFO: Updating deployment test-rollover-deployment
Jan  5 13:15:27.432: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  5 13:15:29.473: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  5 13:15:29.479: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  5 13:15:29.484: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:29.484: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826927, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:31.498: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:31.498: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826927, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:33.542: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:33.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826927, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:35.500: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:35.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826927, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:37.500: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:37.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826936, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:39.504: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:39.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826936, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:41.499: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:41.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826936, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:43.499: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:43.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826936, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:45.499: INFO: all replica sets need to contain the pod-template-hash label
Jan  5 13:15:45.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826936, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713826925, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 13:15:47.499: INFO: 
Jan  5 13:15:47.499: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  5 13:15:47.512: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7103,SelfLink:/apis/apps/v1/namespaces/deployment-7103/deployments/test-rollover-deployment,UID:3d65193d-e664-4441-9092-add5669f68ab,ResourceVersion:19397557,Generation:2,CreationTimestamp:2020-01-05 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-05 13:15:25 +0000 UTC 2020-01-05 13:15:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-05 13:15:46 +0000 UTC 2020-01-05 13:15:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  5 13:15:47.517: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7103,SelfLink:/apis/apps/v1/namespaces/deployment-7103/replicasets/test-rollover-deployment-854595fc44,UID:b907b576-f25e-4b74-8d03-56d60e098c64,ResourceVersion:19397547,Generation:2,CreationTimestamp:2020-01-05 13:15:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3d65193d-e664-4441-9092-add5669f68ab 0xc0026b7757 0xc0026b7758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  5 13:15:47.517: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  5 13:15:47.517: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7103,SelfLink:/apis/apps/v1/namespaces/deployment-7103/replicasets/test-rollover-controller,UID:75563562-7c0d-47a3-86a5-09ce3844ee7c,ResourceVersion:19397556,Generation:2,CreationTimestamp:2020-01-05 13:15:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3d65193d-e664-4441-9092-add5669f68ab 0xc0026b7687 0xc0026b7688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:15:47.518: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7103,SelfLink:/apis/apps/v1/namespaces/deployment-7103/replicasets/test-rollover-deployment-9b8b997cf,UID:55ac5dd6-09f2-4fa7-9012-cf0974d7c40f,ResourceVersion:19397510,Generation:2,CreationTimestamp:2020-01-05 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 3d65193d-e664-4441-9092-add5669f68ab 0xc0026b7820 0xc0026b7821}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:15:47.524: INFO: Pod "test-rollover-deployment-854595fc44-plpwg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-plpwg,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7103,SelfLink:/api/v1/namespaces/deployment-7103/pods/test-rollover-deployment-854595fc44-plpwg,UID:38f55b57-3089-40d3-9049-94235c57cdfc,ResourceVersion:19397530,Generation:0,CreationTimestamp:2020-01-05 13:15:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b907b576-f25e-4b74-8d03-56d60e098c64 0xc00222a747 0xc00222a748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fd56w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fd56w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-fd56w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00222a860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00222a880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:15:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:15:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:15:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:15:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-05 13:15:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-05 13:15:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://40e44ea310bce8d01bab8e1fe509e4084eed04a22cbbcaa669f43c1f905f61a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:15:47.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7103" for this suite.
Jan  5 13:15:55.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:15:55.712: INFO: namespace deployment-7103 deletion completed in 8.182493077s

• [SLOW TEST:41.615 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
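
A rollover is a pod-template update issued while the previous rollout is still in flight: the Deployment controller abandons the intermediate ReplicaSet (test-rollover-deployment-9b8b997cf above) and drains every old ReplicaSet to zero while only the newest template proceeds. The Deployment also sets minReadySeconds: 10, so a new pod must stay Ready for ten seconds before it counts as available, which is why the status poll above keeps repeating after ReadyReplicas reaches 2. A hedged sketch of the update step; kubectl set image cannot rename a container the way the in-code update does, so this only swaps the image:

kubectl -n deployment-7103 set image deployment/test-rollover-deployment \
  redis-slave=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n deployment-7103 rollout status deployment/test-rollover-deployment
kubectl -n deployment-7103 get rs -l name=rollover-pod   # old ReplicaSets should report 0 replicas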
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:15:55.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  5 13:15:55.878: INFO: Waiting up to 5m0s for pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a" in namespace "downward-api-2552" to be "success or failure"
Jan  5 13:15:55.887: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.641671ms
Jan  5 13:15:57.898: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020663688s
Jan  5 13:15:59.908: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030400474s
Jan  5 13:16:01.936: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058020746s
Jan  5 13:16:03.953: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074879377s
STEP: Saw pod success
Jan  5 13:16:03.953: INFO: Pod "downward-api-45c2767f-a006-447a-a69b-ca979e796a1a" satisfied condition "success or failure"
Jan  5 13:16:03.958: INFO: Trying to get logs from node iruya-node pod downward-api-45c2767f-a006-447a-a69b-ca979e796a1a container dapi-container: 
STEP: delete the pod
Jan  5 13:16:04.139: INFO: Waiting for pod downward-api-45c2767f-a006-447a-a69b-ca979e796a1a to disappear
Jan  5 13:16:04.149: INFO: Pod downward-api-45c2767f-a006-447a-a69b-ca979e796a1a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:16:04.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2552" for this suite.
Jan  5 13:16:10.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:16:10.327: INFO: namespace downward-api-2552 deletion completed in 6.169278689s

• [SLOW TEST:14.614 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
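
The pod under test exposes its own name, namespace and IP as environment variables through downward-API fieldRefs. A minimal sketch of such a pod; the pod and variable names are hypothetical, but the fieldPath values are the standard ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: { fieldRef: { fieldPath: metadata.name } }
    - name: POD_NAMESPACE
      valueFrom: { fieldRef: { fieldPath: metadata.namespace } }
    - name: POD_IP
      valueFrom: { fieldRef: { fieldPath: status.podIP } }
EOF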
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:16:10.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0105 13:16:40.506211       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 13:16:40.506: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:16:40.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-302" for this suite.
Jan  5 13:16:48.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:16:49.422: INFO: namespace gc-302 deletion completed in 8.910063168s

• [SLOW TEST:39.095 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
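
The orphaning behaviour is driven entirely by the delete options: with PropagationPolicy=Orphan the garbage collector removes the ownerReferences instead of cascading, so the ReplicaSet outlives its Deployment for the whole 30-second observation window above. A hedged sketch against a v1.15-era client like this run's (deployment name hypothetical; kubectl 1.20+ spells the flag --cascade=orphan):

kubectl delete deployment my-deploy --cascade=false
kubectl get rs   # the Deployment's ReplicaSet survives, now without an owner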
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:16:49.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  5 13:16:49.827: INFO: Waiting up to 5m0s for pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362" in namespace "emptydir-1256" to be "success or failure"
Jan  5 13:16:49.929: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362": Phase="Pending", Reason="", readiness=false. Elapsed: 101.695665ms
Jan  5 13:16:51.936: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108773738s
Jan  5 13:16:53.952: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125037859s
Jan  5 13:16:55.979: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151298804s
Jan  5 13:16:57.989: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.16137246s
STEP: Saw pod success
Jan  5 13:16:57.989: INFO: Pod "pod-ea710da5-cc60-495f-9b7a-a2c8933e0362" satisfied condition "success or failure"
Jan  5 13:16:57.996: INFO: Trying to get logs from node iruya-node pod pod-ea710da5-cc60-495f-9b7a-a2c8933e0362 container test-container: 
STEP: delete the pod
Jan  5 13:16:58.068: INFO: Waiting for pod pod-ea710da5-cc60-495f-9b7a-a2c8933e0362 to disappear
Jan  5 13:16:58.074: INFO: Pod pod-ea710da5-cc60-495f-9b7a-a2c8933e0362 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:16:58.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1256" for this suite.
Jan  5 13:17:04.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:17:04.279: INFO: namespace emptydir-1256 deletion completed in 6.199717616s

• [SLOW TEST:14.856 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
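
The (root,0666,default) case writes a file with mode 0666 as root onto an emptyDir backed by the default medium (node disk rather than tmpfs) and checks it back. A stand-in for the test pod; the busybox image and the command are assumptions, not the framework's actual test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium; 'medium: Memory' would select tmpfs
EOF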
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:17:04.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  5 13:17:04.980: INFO: created pod pod-service-account-defaultsa
Jan  5 13:17:04.980: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  5 13:17:04.992: INFO: created pod pod-service-account-mountsa
Jan  5 13:17:04.992: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  5 13:17:05.058: INFO: created pod pod-service-account-nomountsa
Jan  5 13:17:05.059: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  5 13:17:05.196: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  5 13:17:05.197: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  5 13:17:05.219: INFO: created pod pod-service-account-mountsa-mountspec
Jan  5 13:17:05.219: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  5 13:17:05.238: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  5 13:17:05.239: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  5 13:17:05.348: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  5 13:17:05.349: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  5 13:17:05.387: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  5 13:17:05.387: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  5 13:17:05.427: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  5 13:17:05.427: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:17:05.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4472" for this suite.
Jan  5 13:17:31.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:17:31.982: INFO: namespace svcaccounts-4472 deletion completed in 26.44313381s

• [SLOW TEST:27.702 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
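The nine pods above form a 3x3 matrix: three ServiceAccounts (token automount unset, enabled, disabled) crossed with three pod-level settings of spec.automountServiceAccountToken (unset, true, false). The logged mount results confirm the precedence rule: the pod-level field wins whenever it is set, and the ServiceAccount's setting applies only when the pod leaves it unset. A minimal opt-out, with illustrative names:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomount-sa
  automountServiceAccountToken: false    # SA-level default: no token volume
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: no-token-pod
  spec:
    serviceAccountName: nomount-sa
    automountServiceAccountToken: false  # pod-level field; overrides the SA if they disagree
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]

  kubectl exec no-token-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
  # fails: no token volume was mounted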
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:17:31.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:17:32.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4825" for this suite.
Jan  5 13:17:38.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:17:38.453: INFO: namespace kubelet-test-4825 deletion completed in 6.23173271s

• [SLOW TEST:6.471 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
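The point of this spec is narrow: a pod whose container command always exits non-zero, so it never becomes Ready, must still be deletable like any healthy pod. Roughly, with an illustrative name:

  kubectl run always-fails --image=busybox --restart=Never -- /bin/false
  kubectl get pod always-fails      # status: Error (the command exits 1 immediately)
  kubectl delete pod always-fails   # deletion succeeds regardless of container health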
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:17:38.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  5 13:17:38.589: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6711,SelfLink:/api/v1/namespaces/watch-6711/configmaps/e2e-watch-test-watch-closed,UID:20e8c71b-6773-4b99-ab93-2387fd13d196,ResourceVersion:19397960,Generation:0,CreationTimestamp:2020-01-05 13:17:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 13:17:38.590: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6711,SelfLink:/api/v1/namespaces/watch-6711/configmaps/e2e-watch-test-watch-closed,UID:20e8c71b-6773-4b99-ab93-2387fd13d196,ResourceVersion:19397961,Generation:0,CreationTimestamp:2020-01-05 13:17:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  5 13:17:38.626: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6711,SelfLink:/api/v1/namespaces/watch-6711/configmaps/e2e-watch-test-watch-closed,UID:20e8c71b-6773-4b99-ab93-2387fd13d196,ResourceVersion:19397962,Generation:0,CreationTimestamp:2020-01-05 13:17:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 13:17:38.627: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6711,SelfLink:/api/v1/namespaces/watch-6711/configmaps/e2e-watch-test-watch-closed,UID:20e8c71b-6773-4b99-ab93-2387fd13d196,ResourceVersion:19397963,Generation:0,CreationTimestamp:2020-01-05 13:17:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:17:38.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6711" for this suite.
Jan  5 13:17:44.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:17:44.879: INFO: namespace watch-6711 deletion completed in 6.212863744s

• [SLOW TEST:6.425 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
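The resumption pattern under test is standard watch bookkeeping: remember the resourceVersion of the last event the first watch delivered (19397961 above), then open a new watch from that version so the server replays every change made while the watch was closed, here the MODIFIED at 19397962 and the DELETED at 19397963. Against the raw API, using the namespace and version from this run:

  kubectl proxy --port=8001 &
  # Resume watching configmaps from the last observed resourceVersion:
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/watch-6711/configmaps?watch=1&resourceVersion=19397961"
  # The server streams each missed event as a JSON object: the second MODIFIED, then the DELETED.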
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:17:44.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-914e376b-9951-456c-82e4-b3d7ceaecfb9
STEP: Creating a pod to test consume secrets
Jan  5 13:17:45.038: INFO: Waiting up to 5m0s for pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459" in namespace "secrets-7425" to be "success or failure"
Jan  5 13:17:45.070: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Pending", Reason="", readiness=false. Elapsed: 31.304049ms
Jan  5 13:17:47.079: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040420453s
Jan  5 13:17:49.870: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831858169s
Jan  5 13:17:51.885: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Pending", Reason="", readiness=false. Elapsed: 6.846349222s
Jan  5 13:17:53.900: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Pending", Reason="", readiness=false. Elapsed: 8.861739462s
Jan  5 13:17:55.911: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.872955866s
STEP: Saw pod success
Jan  5 13:17:55.911: INFO: Pod "pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459" satisfied condition "success or failure"
Jan  5 13:17:55.916: INFO: Trying to get logs from node iruya-node pod pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459 container secret-volume-test: 
STEP: delete the pod
Jan  5 13:17:56.026: INFO: Waiting for pod pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459 to disappear
Jan  5 13:17:56.180: INFO: Pod pod-secrets-c43d4dcb-b785-43f1-8f7d-de18187f2459 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:17:56.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7425" for this suite.
Jan  5 13:18:02.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:18:02.341: INFO: namespace secrets-7425 deletion completed in 6.149588726s

• [SLOW TEST:17.461 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
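This spec projects a Secret volume with an explicit defaultMode while the pod runs as a non-root user with an fsGroup set, then checks both the file mode and the group ownership inside the container. A sketch with illustrative UIDs, names, and mode:

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000       # non-root
      fsGroup: 2000         # projected files are group-owned by this GID
    volumes:
    - name: creds
      secret:
        secretName: my-secret
        defaultMode: 0440   # applied to every projected key
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/creds"]
      volumeMounts:
      - name: creds
        mountPath: /etc/creds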
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:18:02.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:18:02.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9007" for this suite.
Jan  5 13:18:18.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:18:18.635: INFO: namespace pods-9007 deletion completed in 16.137105826s

• [SLOW TEST:16.293 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
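The QOS spec only needs to see that the API server derives and publishes status.qosClass when the pod is submitted. The class follows from the resource stanza: requests equal to limits for every container yields Guaranteed, any partial requests/limits yields Burstable, and none at all yields BestEffort. For example, with an illustrative pod:

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests: {cpu: 100m, memory: 64Mi}
        limits:   {cpu: 100m, memory: 64Mi}   # requests == limits -> Guaranteed

  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints: Guaranteed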
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:18:18.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:18:18.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:18:27.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4813" for this suite.
Jan  5 13:19:09.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:19:09.312: INFO: namespace pods-4813 deletion completed in 42.185886365s

• [SLOW TEST:50.677 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
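kubectl exec normally negotiates a SPDY upgrade, but the apiserver also accepts a WebSocket upgrade on the same exec subresource, and that raw endpoint is what this spec drives. In outline, with an illustrative pod name and query parameters as in the exec API:

  # SPDY path, what `kubectl exec` does:
  kubectl exec ws-demo-pod -- echo remote execution

  # WebSocket path: point any websocket client, authenticated with a bearer token, at
  #   wss://<apiserver>/api/v1/namespaces/<ns>/pods/ws-demo-pod/exec?command=echo&command=hi&stdout=true&stderr=true
  # Each binary frame is prefixed with one channel byte: 0=stdin, 1=stdout, 2=stderr.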
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:19:09.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-kqjx
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 13:19:09.485: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kqjx" in namespace "subpath-4317" to be "success or failure"
Jan  5 13:19:09.493: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.462682ms
Jan  5 13:19:11.510: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024696901s
Jan  5 13:19:13.522: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036489084s
Jan  5 13:19:15.534: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048886974s
Jan  5 13:19:17.546: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061337942s
Jan  5 13:19:19.556: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.070978724s
Jan  5 13:19:21.568: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 12.083021673s
Jan  5 13:19:23.580: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.095087935s
Jan  5 13:19:25.592: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.107195398s
Jan  5 13:19:27.605: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.120347121s
Jan  5 13:19:29.618: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.133178983s
Jan  5 13:19:31.626: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.141261006s
Jan  5 13:19:33.638: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 24.152803289s
Jan  5 13:19:35.648: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 26.163199327s
Jan  5 13:19:37.657: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Running", Reason="", readiness=true. Elapsed: 28.172201209s
Jan  5 13:19:39.668: INFO: Pod "pod-subpath-test-configmap-kqjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.183345163s
STEP: Saw pod success
Jan  5 13:19:39.669: INFO: Pod "pod-subpath-test-configmap-kqjx" satisfied condition "success or failure"
Jan  5 13:19:39.674: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-kqjx container test-container-subpath-configmap-kqjx: 
STEP: delete the pod
Jan  5 13:19:39.753: INFO: Waiting for pod pod-subpath-test-configmap-kqjx to disappear
Jan  5 13:19:39.760: INFO: Pod pod-subpath-test-configmap-kqjx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-kqjx
Jan  5 13:19:39.760: INFO: Deleting pod "pod-subpath-test-configmap-kqjx" in namespace "subpath-4317"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:19:39.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4317" for this suite.
Jan  5 13:19:45.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:19:45.949: INFO: namespace subpath-4317 deletion completed in 6.177545259s

• [SLOW TEST:36.637 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
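Here subPath mounts a single ConfigMap key over a path where a regular file already exists in the image; the kubelet bind-mounts the key's file on top of that one file instead of shadowing the whole directory, which is why the spec is [LinuxOnly]. A sketch with an illustrative ConfigMap and target file:

  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-existing-file-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cm
      configMap:
        name: my-config           # assumed to carry a key named "passwd"
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "cat /etc/passwd"]   # prints the ConfigMap content
      volumeMounts:
      - name: cm
        mountPath: /etc/passwd    # a file that already exists in the image...
        subPath: passwd           # ...replaced by just this one key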
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:19:45.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4272
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4272
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4272
Jan  5 13:19:46.172: INFO: Found 0 stateful pods, waiting for 1
Jan  5 13:19:56.189: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan  5 13:19:56.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:19:56.915: INFO: stderr: "I0105 13:19:56.500169     161 log.go:172] (0xc000a32420) (0xc0009ea640) Create stream\nI0105 13:19:56.500771     161 log.go:172] (0xc000a32420) (0xc0009ea640) Stream added, broadcasting: 1\nI0105 13:19:56.512232     161 log.go:172] (0xc000a32420) Reply frame received for 1\nI0105 13:19:56.512595     161 log.go:172] (0xc000a32420) (0xc0009ea6e0) Create stream\nI0105 13:19:56.512699     161 log.go:172] (0xc000a32420) (0xc0009ea6e0) Stream added, broadcasting: 3\nI0105 13:19:56.520184     161 log.go:172] (0xc000a32420) Reply frame received for 3\nI0105 13:19:56.520447     161 log.go:172] (0xc000a32420) (0xc000a2a000) Create stream\nI0105 13:19:56.520483     161 log.go:172] (0xc000a32420) (0xc000a2a000) Stream added, broadcasting: 5\nI0105 13:19:56.524128     161 log.go:172] (0xc000a32420) Reply frame received for 5\nI0105 13:19:56.706769     161 log.go:172] (0xc000a32420) Data frame received for 5\nI0105 13:19:56.706838     161 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0105 13:19:56.706867     161 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:19:56.766217     161 log.go:172] (0xc000a32420) Data frame received for 3\nI0105 13:19:56.766290     161 log.go:172] (0xc0009ea6e0) (3) Data frame handling\nI0105 13:19:56.766311     161 log.go:172] (0xc0009ea6e0) (3) Data frame sent\nI0105 13:19:56.900551     161 log.go:172] (0xc000a32420) Data frame received for 1\nI0105 13:19:56.901112     161 log.go:172] (0xc000a32420) (0xc000a2a000) Stream removed, broadcasting: 5\nI0105 13:19:56.901234     161 log.go:172] (0xc0009ea640) (1) Data frame handling\nI0105 13:19:56.901468     161 log.go:172] (0xc0009ea640) (1) Data frame sent\nI0105 13:19:56.901614     161 log.go:172] (0xc000a32420) (0xc0009ea6e0) Stream removed, broadcasting: 3\nI0105 13:19:56.901673     161 log.go:172] (0xc000a32420) (0xc0009ea640) Stream removed, broadcasting: 1\nI0105 13:19:56.901689     161 log.go:172] (0xc000a32420) Go away received\nI0105 13:19:56.904319     161 log.go:172] (0xc000a32420) (0xc0009ea640) Stream removed, broadcasting: 1\nI0105 13:19:56.904412     161 log.go:172] (0xc000a32420) (0xc0009ea6e0) Stream removed, broadcasting: 3\nI0105 13:19:56.904425     161 log.go:172] (0xc000a32420) (0xc000a2a000) Stream removed, broadcasting: 5\n"
Jan  5 13:19:56.915: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:19:56.915: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 13:19:56.925: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  5 13:20:06.935: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 13:20:06.936: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 13:20:06.988: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  5 13:20:06.989: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:06.989: INFO: 
Jan  5 13:20:06.989: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  5 13:20:08.789: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988648029s
Jan  5 13:20:10.225: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.188104289s
Jan  5 13:20:11.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.752693007s
Jan  5 13:20:13.424: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.745219693s
Jan  5 13:20:14.811: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.5529993s
Jan  5 13:20:15.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.166074463s
Jan  5 13:20:16.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 152.580422ms
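Scale-up proceeds to 3 replicas even though ss-0 was deliberately made un-Ready: moving index.html aside breaks the nginx readiness check (presumably an HTTP probe against that file) without killing the container. That the controller does not halt is the "burst" behavior, which corresponds to podManagementPolicy: Parallel; under the default OrderedReady policy it would wait for each pod to become Ready before creating the next. A sketch of the relevant spec fields, probe details assumed:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    replicas: 3
    podManagementPolicy: Parallel    # burst scaling: create/delete pods without waiting for Ready
    selector:
      matchLabels: {app: ss-demo}
    template:
      metadata:
        labels: {app: ss-demo}
      spec:
        containers:
        - name: nginx
          image: nginx
          readinessProbe:            # fails with 404 once index.html is moved to /tmp
            httpGet: {path: /index.html, port: 80}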
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4272
Jan  5 13:20:17.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:20:18.355: INFO: stderr: "I0105 13:20:18.084840     182 log.go:172] (0xc00055c160) (0xc00088a5a0) Create stream\nI0105 13:20:18.084989     182 log.go:172] (0xc00055c160) (0xc00088a5a0) Stream added, broadcasting: 1\nI0105 13:20:18.088941     182 log.go:172] (0xc00055c160) Reply frame received for 1\nI0105 13:20:18.088980     182 log.go:172] (0xc00055c160) (0xc00066c320) Create stream\nI0105 13:20:18.088993     182 log.go:172] (0xc00055c160) (0xc00066c320) Stream added, broadcasting: 3\nI0105 13:20:18.091449     182 log.go:172] (0xc00055c160) Reply frame received for 3\nI0105 13:20:18.091494     182 log.go:172] (0xc00055c160) (0xc00035e000) Create stream\nI0105 13:20:18.091512     182 log.go:172] (0xc00055c160) (0xc00035e000) Stream added, broadcasting: 5\nI0105 13:20:18.093343     182 log.go:172] (0xc00055c160) Reply frame received for 5\nI0105 13:20:18.203748     182 log.go:172] (0xc00055c160) Data frame received for 5\nI0105 13:20:18.203820     182 log.go:172] (0xc00035e000) (5) Data frame handling\nI0105 13:20:18.203840     182 log.go:172] (0xc00035e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 13:20:18.203864     182 log.go:172] (0xc00055c160) Data frame received for 3\nI0105 13:20:18.203874     182 log.go:172] (0xc00066c320) (3) Data frame handling\nI0105 13:20:18.203895     182 log.go:172] (0xc00066c320) (3) Data frame sent\nI0105 13:20:18.337527     182 log.go:172] (0xc00055c160) (0xc00066c320) Stream removed, broadcasting: 3\nI0105 13:20:18.337828     182 log.go:172] (0xc00055c160) Data frame received for 1\nI0105 13:20:18.337855     182 log.go:172] (0xc00088a5a0) (1) Data frame handling\nI0105 13:20:18.337894     182 log.go:172] (0xc00088a5a0) (1) Data frame sent\nI0105 13:20:18.337915     182 log.go:172] (0xc00055c160) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0105 13:20:18.339692     182 log.go:172] (0xc00055c160) (0xc00035e000) Stream removed, broadcasting: 5\nI0105 13:20:18.340001     182 log.go:172] (0xc00055c160) (0xc00088a5a0) Stream removed, broadcasting: 1\nI0105 13:20:18.340135     182 log.go:172] (0xc00055c160) (0xc00066c320) Stream removed, broadcasting: 3\nI0105 13:20:18.340288     182 log.go:172] (0xc00055c160) (0xc00035e000) Stream removed, broadcasting: 5\nI0105 13:20:18.340668     182 log.go:172] (0xc00055c160) Go away received\n"
Jan  5 13:20:18.355: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 13:20:18.355: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 13:20:18.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:20:18.823: INFO: stderr: "I0105 13:20:18.605463     200 log.go:172] (0xc0009ba370) (0xc0007f46e0) Create stream\nI0105 13:20:18.605882     200 log.go:172] (0xc0009ba370) (0xc0007f46e0) Stream added, broadcasting: 1\nI0105 13:20:18.609977     200 log.go:172] (0xc0009ba370) Reply frame received for 1\nI0105 13:20:18.610064     200 log.go:172] (0xc0009ba370) (0xc0005ea140) Create stream\nI0105 13:20:18.610084     200 log.go:172] (0xc0009ba370) (0xc0005ea140) Stream added, broadcasting: 3\nI0105 13:20:18.611331     200 log.go:172] (0xc0009ba370) Reply frame received for 3\nI0105 13:20:18.611366     200 log.go:172] (0xc0009ba370) (0xc0007f4780) Create stream\nI0105 13:20:18.611379     200 log.go:172] (0xc0009ba370) (0xc0007f4780) Stream added, broadcasting: 5\nI0105 13:20:18.614201     200 log.go:172] (0xc0009ba370) Reply frame received for 5\nI0105 13:20:18.710972     200 log.go:172] (0xc0009ba370) Data frame received for 5\nI0105 13:20:18.711646     200 log.go:172] (0xc0007f4780) (5) Data frame handling\nI0105 13:20:18.711753     200 log.go:172] (0xc0007f4780) (5) Data frame sent\nI0105 13:20:18.712254     200 log.go:172] (0xc0009ba370) Data frame received for 5\nI0105 13:20:18.712295     200 log.go:172] (0xc0007f4780) (5) Data frame handling\nI0105 13:20:18.712336     200 log.go:172] (0xc0009ba370) Data frame received for 3\nI0105 13:20:18.712386     200 log.go:172] (0xc0005ea140) (3) Data frame handling\nI0105 13:20:18.712415     200 log.go:172] (0xc0005ea140) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0105 13:20:18.712626     200 log.go:172] (0xc0007f4780) (5) Data frame sent\nI0105 13:20:18.712655     200 log.go:172] (0xc0009ba370) Data frame received for 5\nI0105 13:20:18.712699     200 log.go:172] (0xc0007f4780) (5) Data frame handling\nI0105 13:20:18.712732     200 log.go:172] (0xc0007f4780) (5) Data frame sent\n+ true\nI0105 13:20:18.812154     200 log.go:172] (0xc0009ba370) (0xc0007f4780) Stream removed, broadcasting: 5\nI0105 13:20:18.812331     200 log.go:172] (0xc0009ba370) Data frame received for 1\nI0105 13:20:18.812360     200 log.go:172] (0xc0009ba370) (0xc0005ea140) Stream removed, broadcasting: 3\nI0105 13:20:18.812401     200 log.go:172] (0xc0007f46e0) (1) Data frame handling\nI0105 13:20:18.812419     200 log.go:172] (0xc0007f46e0) (1) Data frame sent\nI0105 13:20:18.812440     200 log.go:172] (0xc0009ba370) (0xc0007f46e0) Stream removed, broadcasting: 1\nI0105 13:20:18.812452     200 log.go:172] (0xc0009ba370) Go away received\nI0105 13:20:18.813940     200 log.go:172] (0xc0009ba370) (0xc0007f46e0) Stream removed, broadcasting: 1\nI0105 13:20:18.813951     200 log.go:172] (0xc0009ba370) (0xc0005ea140) Stream removed, broadcasting: 3\nI0105 13:20:18.813957     200 log.go:172] (0xc0009ba370) (0xc0007f4780) Stream removed, broadcasting: 5\n"
Jan  5 13:20:18.823: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 13:20:18.823: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 13:20:18.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:20:19.249: INFO: stderr: "I0105 13:20:18.971432     220 log.go:172] (0xc000aa4370) (0xc000978640) Create stream\nI0105 13:20:18.971659     220 log.go:172] (0xc000aa4370) (0xc000978640) Stream added, broadcasting: 1\nI0105 13:20:18.976701     220 log.go:172] (0xc000aa4370) Reply frame received for 1\nI0105 13:20:18.976743     220 log.go:172] (0xc000aa4370) (0xc000a8e000) Create stream\nI0105 13:20:18.976751     220 log.go:172] (0xc000aa4370) (0xc000a8e000) Stream added, broadcasting: 3\nI0105 13:20:18.977875     220 log.go:172] (0xc000aa4370) Reply frame received for 3\nI0105 13:20:18.977911     220 log.go:172] (0xc000aa4370) (0xc0005d2280) Create stream\nI0105 13:20:18.977923     220 log.go:172] (0xc000aa4370) (0xc0005d2280) Stream added, broadcasting: 5\nI0105 13:20:18.982889     220 log.go:172] (0xc000aa4370) Reply frame received for 5\nI0105 13:20:19.080814     220 log.go:172] (0xc000aa4370) Data frame received for 5\nI0105 13:20:19.080993     220 log.go:172] (0xc0005d2280) (5) Data frame handling\nI0105 13:20:19.081035     220 log.go:172] (0xc0005d2280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0105 13:20:19.081068     220 log.go:172] (0xc000aa4370) Data frame received for 3\nI0105 13:20:19.081096     220 log.go:172] (0xc000a8e000) (3) Data frame handling\nI0105 13:20:19.081140     220 log.go:172] (0xc000a8e000) (3) Data frame sent\nI0105 13:20:19.083577     220 log.go:172] (0xc000aa4370) Data frame received for 5\nI0105 13:20:19.083597     220 log.go:172] (0xc0005d2280) (5) Data frame handling\nI0105 13:20:19.083618     220 log.go:172] (0xc0005d2280) (5) Data frame sent\n+ true\nI0105 13:20:19.233125     220 log.go:172] (0xc000aa4370) Data frame received for 1\nI0105 13:20:19.233362     220 log.go:172] (0xc000978640) (1) Data frame handling\nI0105 13:20:19.233419     220 log.go:172] (0xc000978640) (1) Data frame sent\nI0105 13:20:19.234197     220 log.go:172] (0xc000aa4370) (0xc000978640) Stream removed, broadcasting: 1\nI0105 13:20:19.234324     220 log.go:172] (0xc000aa4370) (0xc000a8e000) Stream removed, broadcasting: 3\nI0105 13:20:19.234367     220 log.go:172] (0xc000aa4370) (0xc0005d2280) Stream removed, broadcasting: 5\nI0105 13:20:19.234391     220 log.go:172] (0xc000aa4370) Go away received\nI0105 13:20:19.236162     220 log.go:172] (0xc000aa4370) (0xc000978640) Stream removed, broadcasting: 1\nI0105 13:20:19.236216     220 log.go:172] (0xc000aa4370) (0xc000a8e000) Stream removed, broadcasting: 3\nI0105 13:20:19.236235     220 log.go:172] (0xc000aa4370) (0xc0005d2280) Stream removed, broadcasting: 5\n"
Jan  5 13:20:19.249: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 13:20:19.249: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 13:20:19.264: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:20:19.264: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 13:20:19.264: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan  5 13:20:19.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:20:19.736: INFO: stderr: "I0105 13:20:19.448148     240 log.go:172] (0xc000116dc0) (0xc0005b2960) Create stream\nI0105 13:20:19.448466     240 log.go:172] (0xc000116dc0) (0xc0005b2960) Stream added, broadcasting: 1\nI0105 13:20:19.455468     240 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0105 13:20:19.455550     240 log.go:172] (0xc000116dc0) (0xc00066e000) Create stream\nI0105 13:20:19.455565     240 log.go:172] (0xc000116dc0) (0xc00066e000) Stream added, broadcasting: 3\nI0105 13:20:19.457443     240 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0105 13:20:19.457482     240 log.go:172] (0xc000116dc0) (0xc0005b2a00) Create stream\nI0105 13:20:19.457494     240 log.go:172] (0xc000116dc0) (0xc0005b2a00) Stream added, broadcasting: 5\nI0105 13:20:19.460355     240 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0105 13:20:19.564030     240 log.go:172] (0xc000116dc0) Data frame received for 3\nI0105 13:20:19.564148     240 log.go:172] (0xc00066e000) (3) Data frame handling\nI0105 13:20:19.564169     240 log.go:172] (0xc00066e000) (3) Data frame sent\nI0105 13:20:19.564207     240 log.go:172] (0xc000116dc0) Data frame received for 5\nI0105 13:20:19.564218     240 log.go:172] (0xc0005b2a00) (5) Data frame handling\nI0105 13:20:19.564230     240 log.go:172] (0xc0005b2a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:20:19.721295     240 log.go:172] (0xc000116dc0) Data frame received for 1\nI0105 13:20:19.721421     240 log.go:172] (0xc000116dc0) (0xc00066e000) Stream removed, broadcasting: 3\nI0105 13:20:19.721556     240 log.go:172] (0xc0005b2960) (1) Data frame handling\nI0105 13:20:19.721590     240 log.go:172] (0xc0005b2960) (1) Data frame sent\nI0105 13:20:19.721615     240 log.go:172] (0xc000116dc0) (0xc0005b2a00) Stream removed, broadcasting: 5\nI0105 13:20:19.721633     240 log.go:172] (0xc000116dc0) (0xc0005b2960) Stream removed, broadcasting: 1\nI0105 13:20:19.721648     240 log.go:172] (0xc000116dc0) Go away received\nI0105 13:20:19.722706     240 log.go:172] (0xc000116dc0) (0xc0005b2960) Stream removed, broadcasting: 1\nI0105 13:20:19.722726     240 log.go:172] (0xc000116dc0) (0xc00066e000) Stream removed, broadcasting: 3\nI0105 13:20:19.722733     240 log.go:172] (0xc000116dc0) (0xc0005b2a00) Stream removed, broadcasting: 5\n"
Jan  5 13:20:19.736: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:20:19.736: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 13:20:19.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:20:20.140: INFO: stderr: "I0105 13:20:19.923578     258 log.go:172] (0xc00012edc0) (0xc0004ee820) Create stream\nI0105 13:20:19.923963     258 log.go:172] (0xc00012edc0) (0xc0004ee820) Stream added, broadcasting: 1\nI0105 13:20:19.931990     258 log.go:172] (0xc00012edc0) Reply frame received for 1\nI0105 13:20:19.932225     258 log.go:172] (0xc00012edc0) (0xc000616320) Create stream\nI0105 13:20:19.932256     258 log.go:172] (0xc00012edc0) (0xc000616320) Stream added, broadcasting: 3\nI0105 13:20:19.935884     258 log.go:172] (0xc00012edc0) Reply frame received for 3\nI0105 13:20:19.935922     258 log.go:172] (0xc00012edc0) (0xc0004ee000) Create stream\nI0105 13:20:19.935931     258 log.go:172] (0xc00012edc0) (0xc0004ee000) Stream added, broadcasting: 5\nI0105 13:20:19.937831     258 log.go:172] (0xc00012edc0) Reply frame received for 5\nI0105 13:20:20.016766     258 log.go:172] (0xc00012edc0) Data frame received for 5\nI0105 13:20:20.016826     258 log.go:172] (0xc0004ee000) (5) Data frame handling\nI0105 13:20:20.016847     258 log.go:172] (0xc0004ee000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:20:20.059980     258 log.go:172] (0xc00012edc0) Data frame received for 3\nI0105 13:20:20.060058     258 log.go:172] (0xc000616320) (3) Data frame handling\nI0105 13:20:20.060089     258 log.go:172] (0xc000616320) (3) Data frame sent\nI0105 13:20:20.132169     258 log.go:172] (0xc00012edc0) Data frame received for 1\nI0105 13:20:20.132276     258 log.go:172] (0xc00012edc0) (0xc000616320) Stream removed, broadcasting: 3\nI0105 13:20:20.132380     258 log.go:172] (0xc0004ee820) (1) Data frame handling\nI0105 13:20:20.132398     258 log.go:172] (0xc0004ee820) (1) Data frame sent\nI0105 13:20:20.132456     258 log.go:172] (0xc00012edc0) (0xc0004ee000) Stream removed, broadcasting: 5\nI0105 13:20:20.132485     258 log.go:172] (0xc00012edc0) (0xc0004ee820) Stream removed, broadcasting: 1\nI0105 13:20:20.132499     258 log.go:172] (0xc00012edc0) Go away received\nI0105 13:20:20.133155     258 log.go:172] (0xc00012edc0) (0xc0004ee820) Stream removed, broadcasting: 1\nI0105 13:20:20.133204     258 log.go:172] (0xc00012edc0) (0xc000616320) Stream removed, broadcasting: 3\nI0105 13:20:20.133230     258 log.go:172] (0xc00012edc0) (0xc0004ee000) Stream removed, broadcasting: 5\n"
Jan  5 13:20:20.140: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:20:20.140: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 13:20:20.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 13:20:20.826: INFO: stderr: "I0105 13:20:20.333872     278 log.go:172] (0xc0007ee420) (0xc0007b2640) Create stream\nI0105 13:20:20.334157     278 log.go:172] (0xc0007ee420) (0xc0007b2640) Stream added, broadcasting: 1\nI0105 13:20:20.347611     278 log.go:172] (0xc0007ee420) Reply frame received for 1\nI0105 13:20:20.347712     278 log.go:172] (0xc0007ee420) (0xc0007f4000) Create stream\nI0105 13:20:20.347738     278 log.go:172] (0xc0007ee420) (0xc0007f4000) Stream added, broadcasting: 3\nI0105 13:20:20.350004     278 log.go:172] (0xc0007ee420) Reply frame received for 3\nI0105 13:20:20.350033     278 log.go:172] (0xc0007ee420) (0xc000562320) Create stream\nI0105 13:20:20.350043     278 log.go:172] (0xc0007ee420) (0xc000562320) Stream added, broadcasting: 5\nI0105 13:20:20.354741     278 log.go:172] (0xc0007ee420) Reply frame received for 5\nI0105 13:20:20.516728     278 log.go:172] (0xc0007ee420) Data frame received for 5\nI0105 13:20:20.516829     278 log.go:172] (0xc000562320) (5) Data frame handling\nI0105 13:20:20.516884     278 log.go:172] (0xc000562320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 13:20:20.564864     278 log.go:172] (0xc0007ee420) Data frame received for 3\nI0105 13:20:20.565109     278 log.go:172] (0xc0007f4000) (3) Data frame handling\nI0105 13:20:20.565155     278 log.go:172] (0xc0007f4000) (3) Data frame sent\nI0105 13:20:20.810886     278 log.go:172] (0xc0007ee420) Data frame received for 1\nI0105 13:20:20.811155     278 log.go:172] (0xc0007ee420) (0xc0007f4000) Stream removed, broadcasting: 3\nI0105 13:20:20.811395     278 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0105 13:20:20.811449     278 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0105 13:20:20.811495     278 log.go:172] (0xc0007ee420) (0xc0007b2640) Stream removed, broadcasting: 1\nI0105 13:20:20.811645     278 log.go:172] (0xc0007ee420) (0xc000562320) Stream removed, broadcasting: 5\nI0105 13:20:20.811794     278 log.go:172] (0xc0007ee420) Go away received\nI0105 13:20:20.813610     278 log.go:172] (0xc0007ee420) (0xc0007b2640) Stream removed, broadcasting: 1\nI0105 13:20:20.813630     278 log.go:172] (0xc0007ee420) (0xc0007f4000) Stream removed, broadcasting: 3\nI0105 13:20:20.813641     278 log.go:172] (0xc0007ee420) (0xc000562320) Stream removed, broadcasting: 5\n"
Jan  5 13:20:20.826: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 13:20:20.826: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 13:20:20.826: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 13:20:20.836: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  5 13:20:30.861: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 13:20:30.862: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 13:20:30.862: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 13:20:30.903: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:30.904: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:30.904: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:30.904: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:30.904: INFO: 
Jan  5 13:20:30.904: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:32.427: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:32.427: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:32.427: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:32.427: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:32.427: INFO: 
Jan  5 13:20:32.427: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:33.464: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:33.465: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:33.465: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:33.465: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:33.465: INFO: 
Jan  5 13:20:33.465: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:34.849: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:34.850: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:34.850: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:34.850: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:34.851: INFO: 
Jan  5 13:20:34.851: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:35.875: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:35.875: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:35.876: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:35.876: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:35.877: INFO: 
Jan  5 13:20:35.877: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:36.901: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:36.901: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:36.901: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:36.901: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  }]
Jan  5 13:20:36.901: INFO: 
Jan  5 13:20:36.901: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  5 13:20:37.923: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:37.923: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:37.924: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:37.924: INFO: 
Jan  5 13:20:37.924: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  5 13:20:38.954: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:38.955: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:38.955: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:38.955: INFO: 
Jan  5 13:20:38.955: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  5 13:20:39.968: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  5 13:20:39.968: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:19:46 +0000 UTC  }]
Jan  5 13:20:39.968: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:20:06 +0000 UTC  }]
Jan  5 13:20:39.968: INFO: 
Jan  5 13:20:39.968: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4272
Jan  5 13:20:40.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:20:41.175: INFO: rc: 1
Jan  5 13:20:41.175: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001aa1230 exit status 1   true [0xc002c96330 0xc002c96348 0xc002c96360] [0xc002c96330 0xc002c96348 0xc002c96360] [0xc002c96340 0xc002c96358] [0xba6c50 0xba6c50] 0xc000382060 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Jan  5 13:20:51.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:20:51.353: INFO: rc: 1
Jan  5 13:20:51.353: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e150 exit status 1   true [0xc00071e2a8 0xc00071e550 0xc00071e7a0] [0xc00071e2a8 0xc00071e550 0xc00071e7a0] [0xc00071e4e8 0xc00071e768] [0xba6c50 0xba6c50] 0xc00271e180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:01.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:04.390: INFO: rc: 1
Jan  5 13:21:04.391: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e240 exit status 1   true [0xc00071e7f8 0xc00071e9a8 0xc00071eb60] [0xc00071e7f8 0xc00071e9a8 0xc00071eb60] [0xc00071e940 0xc00071ea18] [0xba6c50 0xba6c50] 0xc00271e900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:14.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:14.677: INFO: rc: 1
Jan  5 13:21:14.677: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0029920c0 exit status 1   true [0xc00053e038 0xc0007241c0 0xc0007243c8] [0xc00053e038 0xc0007241c0 0xc0007243c8] [0xc000724138 0xc000724360] [0xba6c50 0xba6c50] 0xc002c585a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:24.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:24.915: INFO: rc: 1
Jan  5 13:21:24.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992180 exit status 1   true [0xc0007243f8 0xc000724600 0xc000724668] [0xc0007243f8 0xc000724600 0xc000724668] [0xc0007245c8 0xc000724660] [0xba6c50 0xba6c50] 0xc002c588a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:34.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:35.110: INFO: rc: 1
Jan  5 13:21:35.110: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e360 exit status 1   true [0xc00071eb78 0xc00071ed58 0xc00071efc0] [0xc00071eb78 0xc00071ed58 0xc00071efc0] [0xc00071ecb8 0xc00071ef50] [0xba6c50 0xba6c50] 0xc00271ede0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:45.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:45.284: INFO: rc: 1
Jan  5 13:21:45.285: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bca270 exit status 1   true [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010ef8 0xc000010f28] [0xba6c50 0xba6c50] 0xc00279eba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:21:55.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:21:55.491: INFO: rc: 1
Jan  5 13:21:55.491: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f920c0 exit status 1   true [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2020 0xc0025f2038] [0xba6c50 0xba6c50] 0xc0028322a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:05.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:05.639: INFO: rc: 1
Jan  5 13:22:05.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bca390 exit status 1   true [0xc000010f48 0xc000010fa0 0xc000010fc8] [0xc000010f48 0xc000010fa0 0xc000010fc8] [0xc000010f58 0xc000010fc0] [0xba6c50 0xba6c50] 0xc00279f260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:15.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:15.815: INFO: rc: 1
Jan  5 13:22:15.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e420 exit status 1   true [0xc00071f130 0xc00071f3c0 0xc00071f5d8] [0xc00071f130 0xc00071f3c0 0xc00071f5d8] [0xc00071f2b0 0xc00071f500] [0xba6c50 0xba6c50] 0xc00271f4a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:25.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:26.021: INFO: rc: 1
Jan  5 13:22:26.022: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f921b0 exit status 1   true [0xc0025f2048 0xc0025f2078 0xc0025f20b8] [0xc0025f2048 0xc0025f2078 0xc0025f20b8] [0xc0025f2058 0xc0025f20b0] [0xba6c50 0xba6c50] 0xc0028327e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:36.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:36.172: INFO: rc: 1
Jan  5 13:22:36.173: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e510 exit status 1   true [0xc00071f5f8 0xc00071f940 0xc00071fb68] [0xc00071f5f8 0xc00071f940 0xc00071fb68] [0xc00071f7c8 0xc00071fad8] [0xba6c50 0xba6c50] 0xc00271fd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:46.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:46.383: INFO: rc: 1
Jan  5 13:22:46.383: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992270 exit status 1   true [0xc000724678 0xc0007247f0 0xc000724940] [0xc000724678 0xc0007247f0 0xc000724940] [0xc000724740 0xc000724870] [0xba6c50 0xba6c50] 0xc002c58de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:22:56.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:22:56.562: INFO: rc: 1
Jan  5 13:22:56.563: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f92090 exit status 1   true [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2020 0xc0025f2038] [0xba6c50 0xba6c50] 0xc002832180 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:06.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:06.700: INFO: rc: 1
Jan  5 13:23:06.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e120 exit status 1   true [0xc000724110 0xc000724270 0xc0007243f8] [0xc000724110 0xc000724270 0xc0007243f8] [0xc0007241c0 0xc0007243c8] [0xba6c50 0xba6c50] 0xc002c585a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:16.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:16.874: INFO: rc: 1
Jan  5 13:23:16.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f92180 exit status 1   true [0xc0025f2048 0xc0025f2078 0xc0025f20b8] [0xc0025f2048 0xc0025f2078 0xc0025f20b8] [0xc0025f2058 0xc0025f20b0] [0xba6c50 0xba6c50] 0xc002832600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:26.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:27.062: INFO: rc: 1
Jan  5 13:23:27.062: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f92270 exit status 1   true [0xc0025f20c0 0xc0025f20d8 0xc0025f2120] [0xc0025f20c0 0xc0025f20d8 0xc0025f2120] [0xc0025f20d0 0xc0025f2108] [0xba6c50 0xba6c50] 0xc002832a20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:37.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:37.253: INFO: rc: 1
Jan  5 13:23:37.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e2d0 exit status 1   true [0xc000724500 0xc000724638 0xc000724678] [0xc000724500 0xc000724638 0xc000724678] [0xc000724600 0xc000724668] [0xba6c50 0xba6c50] 0xc002c588a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:47.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:47.476: INFO: rc: 1
Jan  5 13:23:47.476: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992150 exit status 1   true [0xc00071e228 0xc00071e4e8 0xc00071e768] [0xc00071e228 0xc00071e4e8 0xc00071e768] [0xc00071e310 0xc00071e690] [0xba6c50 0xba6c50] 0xc00271e3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:23:57.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:23:57.706: INFO: rc: 1
Jan  5 13:23:57.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992240 exit status 1   true [0xc00071e7a0 0xc00071e940 0xc00071ea18] [0xc00071e7a0 0xc00071e940 0xc00071ea18] [0xc00071e8c0 0xc00071e9f8] [0xba6c50 0xba6c50] 0xc00271e9c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:07.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:08.022: INFO: rc: 1
Jan  5 13:24:08.023: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f92330 exit status 1   true [0xc0025f2140 0xc0025f2198 0xc0025f21d8] [0xc0025f2140 0xc0025f2198 0xc0025f21d8] [0xc0025f2178 0xc0025f21d0] [0xba6c50 0xba6c50] 0xc002832f60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:18.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:18.201: INFO: rc: 1
Jan  5 13:24:18.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992330 exit status 1   true [0xc00071eb60 0xc00071ecb8 0xc00071ef50] [0xc00071eb60 0xc00071ecb8 0xc00071ef50] [0xc00071ec48 0xc00071eea0] [0xba6c50 0xba6c50] 0xc00271ef00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:28.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:28.426: INFO: rc: 1
Jan  5 13:24:28.427: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f92420 exit status 1   true [0xc0025f21e0 0xc0025f2218 0xc0025f2230] [0xc0025f21e0 0xc0025f2218 0xc0025f2230] [0xc0025f2200 0xc0025f2228] [0xba6c50 0xba6c50] 0xc00279e120 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:38.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:38.645: INFO: rc: 1
Jan  5 13:24:38.646: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bca300 exit status 1   true [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010ef8 0xc000010f28] [0xba6c50 0xba6c50] 0xc002646540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:48.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:48.797: INFO: rc: 1
Jan  5 13:24:48.798: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e480 exit status 1   true [0xc0007246c0 0xc000724838 0xc000724a80] [0xc0007246c0 0xc000724838 0xc000724a80] [0xc0007247f0 0xc000724940] [0xba6c50 0xba6c50] 0xc002c58de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:24:58.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:24:58.949: INFO: rc: 1
Jan  5 13:24:58.949: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002bca240 exit status 1   true [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010010 0xc000010f00 0xc000010f40] [0xc000010ef8 0xc000010f28] [0xba6c50 0xba6c50] 0xc0028322a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:25:08.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:25:09.167: INFO: rc: 1
Jan  5 13:25:09.167: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e1b0 exit status 1   true [0xc000724110 0xc000724270 0xc0007243f8] [0xc000724110 0xc000724270 0xc0007243f8] [0xc0007241c0 0xc0007243c8] [0xba6c50 0xba6c50] 0xc002646420 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:25:19.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:25:19.345: INFO: rc: 1
Jan  5 13:25:19.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f920c0 exit status 1   true [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2010 0xc0025f2028 0xc0025f2040] [0xc0025f2020 0xc0025f2038] [0xba6c50 0xba6c50] 0xc002c585a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:25:29.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:25:29.533: INFO: rc: 1
Jan  5 13:25:29.533: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00158e2a0 exit status 1   true [0xc000724500 0xc000724638 0xc000724678] [0xc000724500 0xc000724638 0xc000724678] [0xc000724600 0xc000724668] [0xba6c50 0xba6c50] 0xc002646b40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:25:39.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:25:39.737: INFO: rc: 1
Jan  5 13:25:39.738: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002992090 exit status 1   true [0xc00071e228 0xc00071e4e8 0xc00071e768] [0xc00071e228 0xc00071e4e8 0xc00071e768] [0xc00071e310 0xc00071e690] [0xba6c50 0xba6c50] 0xc00279eba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jan  5 13:25:49.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 13:25:49.942: INFO: rc: 1
Jan  5 13:25:49.942: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan  5 13:25:49.942: INFO: Scaling statefulset ss to 0
Jan  5 13:25:49.956: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  5 13:25:49.961: INFO: Deleting all statefulset in ns statefulset-4272
Jan  5 13:25:49.964: INFO: Scaling statefulset ss to 0
Jan  5 13:25:49.975: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 13:25:49.978: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:25:50.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4272" for this suite.
Jan  5 13:25:58.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:25:58.271: INFO: namespace statefulset-4272 deletion completed in 8.256070696s

• [SLOW TEST:372.320 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
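The five minutes of rc: 1 retries above come from a retry-until-timeout helper: each failed kubectl exec is reattempted after a fixed 10s wait until an overall deadline passes (the log names the helper RunHostCmd). Below is a minimal, self-contained Go sketch of that pattern; the function name and the 5m deadline are assumptions for illustration, not the framework's actual implementation.

// retry_hostcmd.go -- a stand-alone sketch of the retry loop visible in
// the log: run a shell command, and on a non-zero exit wait `interval`
// and try again until `timeout` elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetries executes cmdline through /bin/sh, retrying every
// `interval` until it succeeds or `timeout` expires. The 10s interval
// mirrors the log; the helper name is ours.
func runWithRetries(cmdline string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("/bin/sh", "-c", cmdline).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return string(out), fmt.Errorf("command failed after %v: %v", timeout, err)
		}
		fmt.Printf("Waiting %v to retry failed command: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	// The exact command from the log; it keeps failing with rc 1 once the
	// pod or its nginx container is gone, and the loop keeps retrying.
	cmd := `kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4272 ss-0 -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'`
	if out, err := runWithRetries(cmd, 10*time.Second, 5*time.Minute); err != nil {
		fmt.Println("giving up:", err)
	} else {
		fmt.Println("stdout:", out)
	}
}

Note that the `|| true` inside the pod-side shell only masks the mv failure; the rc: 1 seen above is kubectl's own exit code (the connection upgrade or pod lookup failing), which is exactly what the outer retry loop keys on.
------------------------------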
SS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:25:58.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9427
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9427 to expose endpoints map[]
Jan  5 13:25:58.429: INFO: successfully validated that service multi-endpoint-test in namespace services-9427 exposes endpoints map[] (9.430164ms elapsed)
STEP: Creating pod pod1 in namespace services-9427
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9427 to expose endpoints map[pod1:[100]]
Jan  5 13:26:02.634: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.182216184s elapsed, will retry)
Jan  5 13:26:05.700: INFO: successfully validated that service multi-endpoint-test in namespace services-9427 exposes endpoints map[pod1:[100]] (7.24794582s elapsed)
STEP: Creating pod pod2 in namespace services-9427
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9427 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  5 13:26:10.339: INFO: Unexpected endpoints: found map[667f05d7-4202-4dc9-a456-15128e34beb6:[100]], expected map[pod1:[100] pod2:[101]] (4.619470494s elapsed, will retry)
Jan  5 13:26:13.442: INFO: successfully validated that service multi-endpoint-test in namespace services-9427 exposes endpoints map[pod1:[100] pod2:[101]] (7.721721621s elapsed)
STEP: Deleting pod pod1 in namespace services-9427
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9427 to expose endpoints map[pod2:[101]]
Jan  5 13:26:13.514: INFO: successfully validated that service multi-endpoint-test in namespace services-9427 exposes endpoints map[pod2:[101]] (54.697923ms elapsed)
STEP: Deleting pod pod2 in namespace services-9427
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9427 to expose endpoints map[]
Jan  5 13:26:14.552: INFO: successfully validated that service multi-endpoint-test in namespace services-9427 exposes endpoints map[] (1.027262429s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:26:14.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9427" for this suite.
Jan  5 13:26:36.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:26:36.901: INFO: namespace services-9427 deletion completed in 22.197232779s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.630 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
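The multiport test's core loop is: observe the service's endpoints, compare them against an expected pod-to-ports map, and retry until they match or 3m0s passes. A self-contained Go sketch of that validation loop follows, with a stubbed observer standing in for a real client-go read of the Endpoints object.

package main

import (
	"fmt"
	"reflect"
	"time"
)

// observeEndpoints stands in for listing the service's Endpoints via
// client-go; a stub keeps the sketch runnable on its own. Keys are pod
// names, values are container ports, matching the log's map[pod1:[100]].
func observeEndpoints() map[string][]int {
	return map[string][]int{"pod1": {100}, "pod2": {101}}
}

// waitForEndpoints polls until the observed map equals `expected` or the
// timeout (3m in the log) expires, mirroring the test's validation loop.
func waitForEndpoints(expected map[string][]int, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		got := observeEndpoints()
		if reflect.DeepEqual(got, expected) {
			fmt.Printf("validated endpoints %v (%v elapsed)\n", got, time.Since(start))
			return nil
		}
		fmt.Printf("Unexpected endpoints: found %v, expected %v, will retry\n", got, expected)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for endpoints %v", expected)
}

func main() {
	_ = waitForEndpoints(map[string][]int{"pod1": {100}, "pod2": {101}}, 3*time.Minute)
}
------------------------------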
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:26:36.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 13:26:36.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5421'
Jan  5 13:26:37.107: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 13:26:37.107: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  5 13:26:37.129: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  5 13:26:37.150: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  5 13:26:37.174: INFO: scanned /root for discovery docs: 
Jan  5 13:26:37.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5421'
Jan  5 13:26:59.606: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  5 13:26:59.607: INFO: stdout: "Created e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548\nScaling up e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan  5 13:26:59.607: INFO: stdout: "Created e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548\nScaling up e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  5 13:26:59.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5421'
Jan  5 13:26:59.824: INFO: stderr: ""
Jan  5 13:26:59.824: INFO: stdout: "e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548-kks6s "
Jan  5 13:26:59.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548-kks6s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5421'
Jan  5 13:26:59.983: INFO: stderr: ""
Jan  5 13:26:59.984: INFO: stdout: "true"
Jan  5 13:26:59.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548-kks6s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5421'
Jan  5 13:27:00.079: INFO: stderr: ""
Jan  5 13:27:00.079: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  5 13:27:00.079: INFO: e2e-test-nginx-rc-0fec0db894d3a919a2c4f38e01d29548-kks6s is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  5 13:27:00.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5421'
Jan  5 13:27:00.209: INFO: stderr: ""
Jan  5 13:27:00.209: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:27:00.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5421" for this suite.
Jan  5 13:27:22.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:27:22.413: INFO: namespace kubectl-5421 deletion completed in 22.198292191s

• [SLOW TEST:45.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
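Verification after the rolling update is plain kubectl with go-templates: list pods by the run= label, then check each pod's containerStatuses for a running state. The sketch below wraps those exact commands from the log in Go; it assumes kubectl is on PATH and the kubeconfig points at a live cluster (the namespace is taken from the log and would differ per run).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs the binary with the given args and returns trimmed stdout.
func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	ns := "kubectl-5421" // namespace from the log; adjust for your cluster
	// List pod names for the controller's selector, as the test does.
	pods, err := kubectl("get", "pods", "-o", "template",
		"--template={{range .items}}{{.metadata.name}} {{end}}",
		"-l", "run=e2e-test-nginx-rc", "--namespace="+ns)
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	for _, pod := range strings.Fields(pods) {
		// Check that the named container reports a running state; the
		// `exists` function is part of kubectl's template engine.
		running, _ := kubectl("get", "pods", pod, "-o", "template",
			`--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`,
			"--namespace="+ns)
		fmt.Printf("%s running=%q\n", pod, running)
	}
}
------------------------------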
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:27:22.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  5 13:27:22.550: INFO: Waiting up to 5m0s for pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84" in namespace "emptydir-6252" to be "success or failure"
Jan  5 13:27:22.560: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84": Phase="Pending", Reason="", readiness=false. Elapsed: 9.302682ms
Jan  5 13:27:24.577: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026452128s
Jan  5 13:27:26.587: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036746362s
Jan  5 13:27:28.613: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062997317s
Jan  5 13:27:30.620: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070039444s
STEP: Saw pod success
Jan  5 13:27:30.621: INFO: Pod "pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84" satisfied condition "success or failure"
Jan  5 13:27:30.624: INFO: Trying to get logs from node iruya-node pod pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84 container test-container: 
STEP: delete the pod
Jan  5 13:27:30.678: INFO: Waiting for pod pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84 to disappear
Jan  5 13:27:30.694: INFO: Pod pod-eb80bfd1-dbfa-4143-95bd-5c8d4bab8f84 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:27:30.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6252" for this suite.
Jan  5 13:27:36.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:27:36.906: INFO: namespace emptydir-6252 deletion completed in 6.192603082s

• [SLOW TEST:14.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
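Every "success or failure" wait in these volume tests is the same loop: poll the pod's status.phase every couple of seconds and stop once it reaches a terminal phase (Succeeded or Failed), or give up at 5m0s. A runnable Go sketch of that loop, with a stubbed phase source standing in for a client-go Get:

package main

import (
	"fmt"
	"time"
)

// phases simulates the sequence a client-go Get would observe for a
// short-lived test pod; a stub keeps the sketch self-contained.
var phases = []string{"Pending", "Pending", "Succeeded"}

func getPodPhase() string {
	if len(phases) == 1 {
		return phases[0]
	}
	p := phases[0]
	phases = phases[1:]
	return p
}

// waitForSuccessOrFailure mirrors the "Waiting up to 5m0s for pod ... to
// be 'success or failure'" loop: poll every 2s until a terminal phase.
func waitForSuccessOrFailure(timeout time.Duration) (string, error) {
	start := time.Now()
	for time.Since(start) < timeout {
		phase := getPodPhase()
		fmt.Printf("Phase=%q. Elapsed: %v\n", phase, time.Since(start))
		if phase == "Succeeded" || phase == "Failed" {
			return phase, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("timed out after %v", timeout)
}

func main() {
	phase, err := waitForSuccessOrFailure(5 * time.Minute)
	fmt.Println(phase, err)
}
------------------------------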
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:27:36.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  5 13:27:37.020: INFO: Waiting up to 5m0s for pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b" in namespace "downward-api-4665" to be "success or failure"
Jan  5 13:27:37.027: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629061ms
Jan  5 13:27:39.035: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014013053s
Jan  5 13:27:41.045: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024398451s
Jan  5 13:27:43.057: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036876878s
Jan  5 13:27:45.066: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045346674s
STEP: Saw pod success
Jan  5 13:27:45.066: INFO: Pod "downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b" satisfied condition "success or failure"
Jan  5 13:27:45.075: INFO: Trying to get logs from node iruya-node pod downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b container dapi-container: 
STEP: delete the pod
Jan  5 13:27:45.140: INFO: Waiting for pod downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b to disappear
Jan  5 13:27:45.277: INFO: Pod downward-api-15f837df-f78b-4118-a2ad-5d4f77f98a9b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:27:45.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4665" for this suite.
Jan  5 13:27:51.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:27:51.477: INFO: namespace downward-api-4665 deletion completed in 6.191276997s

• [SLOW TEST:14.570 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
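Inside the dapi-container, the downward API values arrive as ordinary environment variables whose names the pod spec chooses when mapping resourceFieldRef fields such as limits.cpu and requests.memory. A tiny Go program that reads them follows; the variable names below are illustrative conventions, not mandated by the API.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Names here are assumptions: the pod spec maps limits.cpu,
	// limits.memory, requests.cpu, and requests.memory onto whatever
	// env var names it declares.
	for _, name := range []string{"CPU_LIMIT", "MEMORY_LIMIT", "CPU_REQUEST", "MEMORY_REQUEST"} {
		if v, ok := os.LookupEnv(name); ok {
			fmt.Printf("%s=%s\n", name, v)
		} else {
			fmt.Printf("%s is not set\n", name)
		}
	}
}
------------------------------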
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:27:51.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-82059a56-bc63-48e2-8175-e0d28e8440ad
STEP: Creating a pod to test consume secrets
Jan  5 13:27:51.702: INFO: Waiting up to 5m0s for pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b" in namespace "secrets-8526" to be "success or failure"
Jan  5 13:27:51.713: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.800846ms
Jan  5 13:27:53.726: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024075138s
Jan  5 13:27:55.781: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078574328s
Jan  5 13:27:57.793: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090803077s
Jan  5 13:27:59.810: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107711748s
STEP: Saw pod success
Jan  5 13:27:59.810: INFO: Pod "pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b" satisfied condition "success or failure"
Jan  5 13:27:59.819: INFO: Trying to get logs from node iruya-node pod pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b container secret-volume-test: 
STEP: delete the pod
Jan  5 13:27:59.889: INFO: Waiting for pod pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b to disappear
Jan  5 13:27:59.896: INFO: Pod pod-secrets-ff99ef80-9a87-496c-a932-b88ae3e2874b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:27:59.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8526" for this suite.
Jan  5 13:28:05.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:28:06.078: INFO: namespace secrets-8526 deletion completed in 6.175804219s

• [SLOW TEST:14.601 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
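The "mappings and Item Mode set" case projects a single secret key to a new file name inside the volume and gives that item an explicit POSIX mode, so the container only has to read the file and confirm its permissions. A minimal Go check is sketched below; the mount path and the expected mode are illustrative, not the test's exact values.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative path: one secret key remapped to a new file name
	// under the volume mount, with an explicit item mode on it.
	const mounted = "/etc/secret-volume/new-path-data-1"
	info, err := os.Stat(mounted)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	// For an item mode of 0400 this prints -r--------.
	fmt.Printf("mode=%v\n", info.Mode().Perm())
	data, err := os.ReadFile(mounted)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("content=%q\n", data)
}
------------------------------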
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:28:06.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  5 13:28:06.333: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399274,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 13:28:06.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399275,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  5 13:28:06.333: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399276,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  5 13:28:16.443: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399291,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 13:28:16.443: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399292,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  5 13:28:16.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2826,SelfLink:/api/v1/namespaces/watch-2826/configmaps/e2e-watch-test-label-changed,UID:5e43b14f-3402-4f08-9e9f-d2e88f2a1339,ResourceVersion:19399293,Generation:0,CreationTimestamp:2020-01-05 13:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:28:16.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2826" for this suite.
Jan  5 13:28:22.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:28:22.635: INFO: namespace watch-2826 deletion completed in 6.182173933s

• [SLOW TEST:16.557 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
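Note: the DELETED/ADDED pair logged above reflects selector-scoped watch semantics, not object lifecycle. The server delivers events only while the object matches the label selector, so changing the label away from the selector surfaces as DELETED and restoring it surfaces as ADDED, even though the ConfigMap was only modified. A minimal client-go sketch of the same selector-scoped watch, assuming client-go v0.18+ signatures; the namespace, label, and kubeconfig path are taken from this run:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Events arrive only while the ConfigMap carries the selected label;
        // dropping the label shows up as DELETED, restoring it as ADDED.
        w, err := client.CoreV1().ConfigMaps("watch-2826").Watch(context.TODO(),
            metav1.ListOptions{LabelSelector: "watch-this-configmap=label-changed-and-restored"})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
    }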
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:28:22.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:28:22.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd" in namespace "downward-api-2" to be "success or failure"
Jan  5 13:28:22.740: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 31.670816ms
Jan  5 13:28:24.750: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042035225s
Jan  5 13:28:26.763: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055226914s
Jan  5 13:28:28.783: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074736976s
Jan  5 13:28:30.792: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083832614s
STEP: Saw pod success
Jan  5 13:28:30.792: INFO: Pod "downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd" satisfied condition "success or failure"
Jan  5 13:28:30.795: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd container client-container: 
STEP: delete the pod
Jan  5 13:28:30.911: INFO: Waiting for pod downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd to disappear
Jan  5 13:28:31.670: INFO: Pod downwardapi-volume-e1189f14-6662-4657-b8f6-501b5337b8bd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:28:31.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2" for this suite.
Jan  5 13:28:37.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:28:37.955: INFO: namespace downward-api-2 deletion completed in 6.274549973s

• [SLOW TEST:15.319 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
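Note: the "downward API volume plugin" exercised above projects container fields into files that the test container then reads back. A sketch of the kind of volume the test pod uses, built with k8s.io/api/core/v1 types; the resource field (limits.memory) and container name (client-container) come from the log, while the volume and file names are illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        vol := v1.Volume{
            Name: "podinfo", // illustrative
            VolumeSource: v1.VolumeSource{
                DownwardAPI: &v1.DownwardAPIVolumeSource{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path: "memory_limit", // illustrative file name
                        ResourceFieldRef: &v1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.memory",
                            // Divisor 1 projects raw bytes; use e.g. "1Mi" to scale.
                            Divisor: resource.MustParse("1"),
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }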
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:28:37.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4538
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  5 13:28:38.034: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  5 13:29:08.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-4538 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:29:08.393: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:29:08.507034       8 log.go:172] (0xc000ae9290) (0xc001ac0dc0) Create stream
I0105 13:29:08.507258       8 log.go:172] (0xc000ae9290) (0xc001ac0dc0) Stream added, broadcasting: 1
I0105 13:29:08.521378       8 log.go:172] (0xc000ae9290) Reply frame received for 1
I0105 13:29:08.521464       8 log.go:172] (0xc000ae9290) (0xc00223c280) Create stream
I0105 13:29:08.521483       8 log.go:172] (0xc000ae9290) (0xc00223c280) Stream added, broadcasting: 3
I0105 13:29:08.523756       8 log.go:172] (0xc000ae9290) Reply frame received for 3
I0105 13:29:08.523786       8 log.go:172] (0xc000ae9290) (0xc002284000) Create stream
I0105 13:29:08.523799       8 log.go:172] (0xc000ae9290) (0xc002284000) Stream added, broadcasting: 5
I0105 13:29:08.529160       8 log.go:172] (0xc000ae9290) Reply frame received for 5
I0105 13:29:08.821527       8 log.go:172] (0xc000ae9290) Data frame received for 3
I0105 13:29:08.821816       8 log.go:172] (0xc00223c280) (3) Data frame handling
I0105 13:29:08.821880       8 log.go:172] (0xc00223c280) (3) Data frame sent
I0105 13:29:09.011553       8 log.go:172] (0xc000ae9290) Data frame received for 1
I0105 13:29:09.011663       8 log.go:172] (0xc000ae9290) (0xc00223c280) Stream removed, broadcasting: 3
I0105 13:29:09.011711       8 log.go:172] (0xc001ac0dc0) (1) Data frame handling
I0105 13:29:09.011741       8 log.go:172] (0xc001ac0dc0) (1) Data frame sent
I0105 13:29:09.011753       8 log.go:172] (0xc000ae9290) (0xc002284000) Stream removed, broadcasting: 5
I0105 13:29:09.011870       8 log.go:172] (0xc000ae9290) (0xc001ac0dc0) Stream removed, broadcasting: 1
I0105 13:29:09.011911       8 log.go:172] (0xc000ae9290) Go away received
I0105 13:29:09.012134       8 log.go:172] (0xc000ae9290) (0xc001ac0dc0) Stream removed, broadcasting: 1
I0105 13:29:09.012149       8 log.go:172] (0xc000ae9290) (0xc00223c280) Stream removed, broadcasting: 3
I0105 13:29:09.012160       8 log.go:172] (0xc000ae9290) (0xc002284000) Stream removed, broadcasting: 5
Jan  5 13:29:09.012: INFO: Waiting for endpoints: map[]
Jan  5 13:29:09.020: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-4538 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:29:09.020: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:29:09.084360       8 log.go:172] (0xc0007eb8c0) (0xc0022846e0) Create stream
I0105 13:29:09.084450       8 log.go:172] (0xc0007eb8c0) (0xc0022846e0) Stream added, broadcasting: 1
I0105 13:29:09.090385       8 log.go:172] (0xc0007eb8c0) Reply frame received for 1
I0105 13:29:09.090416       8 log.go:172] (0xc0007eb8c0) (0xc001470000) Create stream
I0105 13:29:09.090430       8 log.go:172] (0xc0007eb8c0) (0xc001470000) Stream added, broadcasting: 3
I0105 13:29:09.091608       8 log.go:172] (0xc0007eb8c0) Reply frame received for 3
I0105 13:29:09.091628       8 log.go:172] (0xc0007eb8c0) (0xc002284780) Create stream
I0105 13:29:09.091636       8 log.go:172] (0xc0007eb8c0) (0xc002284780) Stream added, broadcasting: 5
I0105 13:29:09.093793       8 log.go:172] (0xc0007eb8c0) Reply frame received for 5
I0105 13:29:09.202137       8 log.go:172] (0xc0007eb8c0) Data frame received for 3
I0105 13:29:09.202222       8 log.go:172] (0xc001470000) (3) Data frame handling
I0105 13:29:09.202252       8 log.go:172] (0xc001470000) (3) Data frame sent
I0105 13:29:09.314507       8 log.go:172] (0xc0007eb8c0) Data frame received for 1
I0105 13:29:09.314760       8 log.go:172] (0xc0007eb8c0) (0xc001470000) Stream removed, broadcasting: 3
I0105 13:29:09.314938       8 log.go:172] (0xc0022846e0) (1) Data frame handling
I0105 13:29:09.314979       8 log.go:172] (0xc0022846e0) (1) Data frame sent
I0105 13:29:09.314989       8 log.go:172] (0xc0007eb8c0) (0xc0022846e0) Stream removed, broadcasting: 1
I0105 13:29:09.315464       8 log.go:172] (0xc0007eb8c0) (0xc002284780) Stream removed, broadcasting: 5
I0105 13:29:09.315491       8 log.go:172] (0xc0007eb8c0) (0xc0022846e0) Stream removed, broadcasting: 1
I0105 13:29:09.315503       8 log.go:172] (0xc0007eb8c0) (0xc001470000) Stream removed, broadcasting: 3
I0105 13:29:09.315508       8 log.go:172] (0xc0007eb8c0) (0xc002284780) Stream removed, broadcasting: 5
Jan  5 13:29:09.316: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:29:09.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0105 13:29:09.317345       8 log.go:172] (0xc0007eb8c0) Go away received
STEP: Destroying namespace "pod-network-test-4538" for this suite.
Jan  5 13:29:35.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:29:35.480: INFO: namespace pod-network-test-4538 deletion completed in 26.154539819s

• [SLOW TEST:57.524 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
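Note: the curl that ExecWithOptions runs above targets the "dial" endpoint on the host test pod (10.44.0.2:8080), which sends a UDP probe to the target pod (10.44.0.1:8081, then 10.32.0.4:8081) and reports what answered; the empty "Waiting for endpoints: map[]" means every expected hostname was seen. The same probe as a short Go program; the URL is copied from the log, and the JSON response shape in the comment is an assumption about the netexec test image, not something this log confirms:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        url := "http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // expected along the lines of {"responses":["<pod hostname>"]}
    }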
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:29:35.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5464.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 13:29:47.698: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.706: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.711: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.716: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.721: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.726: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.729: INFO: Unable to read jessie_udp@PodARecord from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.737: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81: the server could not find the requested resource (get pods dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81)
Jan  5 13:29:47.737: INFO: Lookups using dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  5 13:29:52.796: INFO: DNS probes using dns-5464/dns-test-8a0dc684-f9f3-4f5e-9c0f-fccbf6b48d81 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:29:52.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5464" for this suite.
Jan  5 13:29:58.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:29:59.092: INFO: namespace dns-5464 deletion completed in 6.15787788s

• [SLOW TEST:23.611 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
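Note: the wheezy/jessie command blocks above are shell loops around dig, once over UDP (+notcp) and once over TCP (+tcp), writing an OK marker file for each name that resolves; the first round of "Unable to read" lines is just the prober not having written its results yet. Equivalent lookups from Go, forcing each transport through a custom resolver dialer; this is a sketch to run inside a pod (so cluster DNS is the configured resolver), not the test's actual prober image:

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        for _, transport := range []string{"udp", "tcp"} {
            transport := transport
            r := &net.Resolver{
                PreferGo: true,
                Dial: func(ctx context.Context, _, addr string) (net.Conn, error) {
                    var d net.Dialer
                    // Mirror dig +notcp / +tcp by pinning the transport.
                    return d.DialContext(ctx, transport, addr)
                },
            }
            addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
            fmt.Printf("%s: %v err=%v\n", transport, addrs, err)
        }
    }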
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:29:59.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:29:59.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762" in namespace "downward-api-5858" to be "success or failure"
Jan  5 13:29:59.221: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762": Phase="Pending", Reason="", readiness=false. Elapsed: 10.488057ms
Jan  5 13:30:01.232: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02112984s
Jan  5 13:30:03.241: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029968869s
Jan  5 13:30:05.249: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038272621s
Jan  5 13:30:07.257: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046563928s
STEP: Saw pod success
Jan  5 13:30:07.257: INFO: Pod "downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762" satisfied condition "success or failure"
Jan  5 13:30:07.265: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762 container client-container: 
STEP: delete the pod
Jan  5 13:30:07.396: INFO: Waiting for pod downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762 to disappear
Jan  5 13:30:07.501: INFO: Pod downwardapi-volume-9146ce27-0203-4a72-b54f-d73067d95762 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:30:07.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5858" for this suite.
Jan  5 13:30:13.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:30:13.833: INFO: namespace downward-api-5858 deletion completed in 6.325071585s

• [SLOW TEST:14.741 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
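Note: as with the memory-limit variant earlier, the value this test projects depends on the resourceFieldRef divisor. For limits.cpu, a divisor of 1 yields whole cores rounded up, while a divisor of 1m yields millicores. A sketch of that arithmetic with resource.Quantity (the 500m value is illustrative):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        limit := resource.MustParse("500m")
        fmt.Println(limit.MilliValue()) // 500 -> what a 1m divisor projects
        fmt.Println(limit.Value())      // 1   -> what a divisor of 1 projects (rounded up)
    }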
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:30:13.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0105 13:30:25.465988       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 13:30:25.466: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:30:25.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2549" for this suite.
Jan  5 13:30:31.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:30:31.644: INFO: namespace gc-2549 deletion completed in 6.172632774s

• [SLOW TEST:17.810 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
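Note: the create/delete/wait sequence above hinges on the delete call's propagation policy. Background (or Foreground) hands the RC's pods to the garbage collector, whereas Orphan would leave them running, which is exactly the distinction between this test and its orphaning sibling. A sketch of the non-orphaning delete with client-go, assuming v0.18+ signatures; the namespace is from the log, the RC name is illustrative:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        policy := metav1.DeletePropagationBackground
        if err := client.CoreV1().ReplicationControllers("gc-2549").Delete(
            context.TODO(), "simpletest.rc", // name illustrative
            metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
            panic(err)
        }
    }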
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:30:31.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:30:31.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46" in namespace "downward-api-4088" to be "success or failure"
Jan  5 13:30:31.802: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46": Phase="Pending", Reason="", readiness=false. Elapsed: 80.568013ms
Jan  5 13:30:33.816: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094821763s
Jan  5 13:30:35.837: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115689218s
Jan  5 13:30:37.846: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12495118s
Jan  5 13:30:39.901: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179868047s
STEP: Saw pod success
Jan  5 13:30:39.902: INFO: Pod "downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46" satisfied condition "success or failure"
Jan  5 13:30:39.907: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46 container client-container: 
STEP: delete the pod
Jan  5 13:30:39.970: INFO: Waiting for pod downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46 to disappear
Jan  5 13:30:39.983: INFO: Pod downwardapi-volume-82bdf391-f26a-4ef4-b6da-157930654e46 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:30:39.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4088" for this suite.
Jan  5 13:30:46.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:30:46.159: INFO: namespace downward-api-4088 deletion completed in 6.170746458s

• [SLOW TEST:14.516 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
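Note: the other half of these downward API checks happens inside the pod: the client-container simply reads the projected file from the volume mount and prints it, which is what "Trying to get logs ... container client-container" collects. A sketch of that read; the mount path and file name are illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        b, err := os.ReadFile("/etc/podinfo/cpu_request")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(b)) // e.g. "1" for a 1-CPU request projected with divisor 1
    }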
------------------------------
SSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:30:46.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:30:46.345: INFO: Creating deployment "nginx-deployment"
Jan  5 13:30:46.350: INFO: Waiting for observed generation 1
Jan  5 13:30:48.657: INFO: Waiting for all required pods to come up
Jan  5 13:30:49.352: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  5 13:31:11.641: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  5 13:31:11.663: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  5 13:31:11.680: INFO: Updating deployment nginx-deployment
Jan  5 13:31:11.680: INFO: Waiting for observed generation 2
Jan  5 13:31:14.193: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  5 13:31:14.709: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  5 13:31:14.803: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  5 13:31:14.977: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  5 13:31:14.978: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  5 13:31:14.983: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  5 13:31:14.989: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  5 13:31:14.989: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  5 13:31:15.000: INFO: Updating deployment nginx-deployment
Jan  5 13:31:15.000: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  5 13:31:16.235: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  5 13:31:16.490: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
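Note: the replica counts verified above follow from proportional scaling. Scaling from 10 to 30 with maxSurge=3 caps total replicas at 30+3=33, and the 33-(8+5)=20 extra slots are split across the two ReplicaSets roughly in proportion to their current sizes, landing at 20 and 13. A sketch of that arithmetic; it reproduces this run's outcome rather than the controller's exact leftover-distribution algorithm:

    package main

    import "fmt"

    func main() {
        oldRS, newRS, extra := 8, 5, 20 // sizes at scale time; 33 - (8+5) = 20
        total := oldRS + newRS
        oldAdd := extra * oldRS / total         // 12, via integer (floor) division
        newAdd := extra - oldAdd                // 8, the remainder goes to the new set
        fmt.Println(oldRS+oldAdd, newRS+newAdd) // 20 13, as verified above
    }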
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  5 13:31:21.589: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7917,SelfLink:/apis/apps/v1/namespaces/deployment-7917/deployments/nginx-deployment,UID:f8273f64-b49f-49e4-87e7-8c2f35ed6954,ResourceVersion:19399987,Generation:3,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-05 13:31:16 +0000 UTC 2020-01-05 13:31:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-05 13:31:19 +0000 UTC 2020-01-05 13:30:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  5 13:31:23.770: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7917,SelfLink:/apis/apps/v1/namespaces/deployment-7917/replicasets/nginx-deployment-55fb7cb77f,UID:4bde00a6-d690-4938-87bf-db9cf9aa802f,ResourceVersion:19399983,Generation:3,CreationTimestamp:2020-01-05 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f8273f64-b49f-49e4-87e7-8c2f35ed6954 0xc0022b0757 0xc0022b0758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 13:31:23.770: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  5 13:31:23.770: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7917,SelfLink:/apis/apps/v1/namespaces/deployment-7917/replicasets/nginx-deployment-7b8c6f4498,UID:ea9e2db0-c765-47ab-beb4-c5e294912b22,ResourceVersion:19399994,Generation:3,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f8273f64-b49f-49e4-87e7-8c2f35ed6954 0xc0022b0827 0xc0022b0828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  5 13:31:24.296: INFO: Pod "nginx-deployment-55fb7cb77f-55fhj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-55fhj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-55fhj,UID:ce075b41-ff9b-4306-8860-53b13b345c1a,ResourceVersion:19399982,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1197 0xc0022b1198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-05 13:31:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.297: INFO: Pod "nginx-deployment-55fb7cb77f-8jgsm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8jgsm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-8jgsm,UID:6fccf4db-46b2-4c61-90d7-70931a636a24,ResourceVersion:19399892,Generation:0,CreationTimestamp:2020-01-05 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b12f7 0xc0022b12f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.297: INFO: Pod "nginx-deployment-55fb7cb77f-fllhs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fllhs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-fllhs,UID:5ed45eee-bac2-4064-9205-0d05865041f6,ResourceVersion:19399979,Generation:0,CreationTimestamp:2020-01-05 13:31:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1467 0xc0022b1468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b14d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b14f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.298: INFO: Pod "nginx-deployment-55fb7cb77f-fs6d5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fs6d5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-fs6d5,UID:61642726-1ff1-43e1-a5e2-b5162477d233,ResourceVersion:19399977,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1577 0xc0022b1578}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b15f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.298: INFO: Pod "nginx-deployment-55fb7cb77f-ks6cv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ks6cv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-ks6cv,UID:8c804759-1798-487d-b583-40c8d5d80364,ResourceVersion:19399891,Generation:0,CreationTimestamp:2020-01-05 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b16a7 0xc0022b16a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-05 13:31:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.298: INFO: Pod "nginx-deployment-55fb7cb77f-lmz46" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lmz46,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-lmz46,UID:2e2dc1c5-1f2d-4d70-b36d-380cfb7add5b,ResourceVersion:19399954,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1a37 0xc0022b1a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1af0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-05 13:31:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.299: INFO: Pod "nginx-deployment-55fb7cb77f-m6f7x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m6f7x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-m6f7x,UID:5f1ce6eb-8570-405f-860e-dad4d30fa369,ResourceVersion:19399882,Generation:0,CreationTimestamp:2020-01-05 13:31:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1cc7 0xc0022b1cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1d40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.299: INFO: Pod "nginx-deployment-55fb7cb77f-mjc87" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mjc87,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-mjc87,UID:cc54f953-0402-4782-add2-f48f38635790,ResourceVersion:19399910,Generation:0,CreationTimestamp:2020-01-05 13:31:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc0022b1eb7 0xc0022b1eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1f30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.299: INFO: Pod "nginx-deployment-55fb7cb77f-rvbj9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rvbj9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-rvbj9,UID:eb120fa5-527c-4a0d-a086-d48ab8d4683f,ResourceVersion:19399946,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc00200c047 0xc00200c048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.299: INFO: Pod "nginx-deployment-55fb7cb77f-sfcl8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sfcl8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-sfcl8,UID:347e37e6-b2b5-4ac3-abf5-5d51b5cb03d4,ResourceVersion:19399909,Generation:0,CreationTimestamp:2020-01-05 13:31:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc00200c167 0xc00200c168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c1d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-05 13:31:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.300: INFO: Pod "nginx-deployment-55fb7cb77f-t9skw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t9skw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-t9skw,UID:c0d2b32c-f910-4d57-abc0-ddce2200c52c,ResourceVersion:19399975,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc00200c2c7 0xc00200c2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.300: INFO: Pod "nginx-deployment-55fb7cb77f-td22q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-td22q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-td22q,UID:804be765-1552-4b12-9308-7ce3eb3e133e,ResourceVersion:19399971,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc00200c3e7 0xc00200c3e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c450} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.300: INFO: Pod "nginx-deployment-55fb7cb77f-xs28g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xs28g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-55fb7cb77f-xs28g,UID:98084e16-a314-4c4f-b681-5dbd58d72ee3,ResourceVersion:19399966,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4bde00a6-d690-4938-87bf-db9cf9aa802f 0xc00200c4f7 0xc00200c4f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.301: INFO: Pod "nginx-deployment-7b8c6f4498-2tm9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tm9g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-2tm9g,UID:bc9e34a3-6396-4ff0-9895-8a43a1ed0739,ResourceVersion:19399993,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200c617 0xc00200c618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.301: INFO: Pod "nginx-deployment-7b8c6f4498-2zwfh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2zwfh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-2zwfh,UID:ae9538f8-efed-46fb-967e-76263243e436,ResourceVersion:19399840,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200c777 0xc00200c778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c7e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-05 13:30:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://49de7eea9b407c6545c573a71d04a201e069845d52b49bcf07eb15487e3774a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.301: INFO: Pod "nginx-deployment-7b8c6f4498-4vrxw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4vrxw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-4vrxw,UID:f786962c-7b9c-4546-815e-bb73d5ce1904,ResourceVersion:19399843,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200c8d7 0xc00200c8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200c950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200c970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-05 13:30:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ad46d9dffaebc1f164367e69661e0ab1e7544d4178723adc25cd684bddf011a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.302: INFO: Pod "nginx-deployment-7b8c6f4498-52qzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-52qzg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-52qzg,UID:3347a645-1090-4488-b4e9-501f4e8a01b1,ResourceVersion:19399974,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200cac7 0xc00200cac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200ccc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200cce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.302: INFO: Pod "nginx-deployment-7b8c6f4498-6l26s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6l26s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-6l26s,UID:a443cce7-cac7-4d50-b64c-5e6618914078,ResourceVersion:19399984,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200cdb7 0xc00200cdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200cf00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200cf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.302: INFO: Pod "nginx-deployment-7b8c6f4498-6lcpz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6lcpz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-6lcpz,UID:3145ed2b-dd5c-420d-aeec-fd869d27a355,ResourceVersion:19399945,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200cff7 0xc00200cff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200d070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200d090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.302: INFO: Pod "nginx-deployment-7b8c6f4498-8plnq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8plnq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-8plnq,UID:4883362d-5cca-4e60-8e2b-954a74a2bd37,ResourceVersion:19399849,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200d187 0xc00200d188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200d200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-05 13:30:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://600df4d76b6bca1459fd7b225165a77b247df7c5956a328bcdaba9f788158467}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.302: INFO: Pod "nginx-deployment-7b8c6f4498-br4ws" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-br4ws,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-br4ws,UID:188450cd-c5d1-4c71-b409-44e3aaa48860,ResourceVersion:19399976,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200d537 0xc00200d538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200d600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200d670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.303: INFO: Pod "nginx-deployment-7b8c6f4498-jwnzh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jwnzh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-jwnzh,UID:3ee1f160-323e-4d9e-a91e-d78207c14f8b,ResourceVersion:19399846,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200d777 0xc00200d778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200d860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200d960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-05 13:30:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c2e50d8ae8f2056d1f30c431d832d9c54b2916580c82a15ea3d7a19464abf1e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.303: INFO: Pod "nginx-deployment-7b8c6f4498-l8rbl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l8rbl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-l8rbl,UID:14e9cad5-4b7b-4c16-8b1e-f37a96188a17,ResourceVersion:19399837,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200daa7 0xc00200daa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200dbf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200dc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-05 13:30:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:09 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a8e03b21477cda2d35d15134f4e55988b3da1c555ce609119606f3e42b6ce086}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.304: INFO: Pod "nginx-deployment-7b8c6f4498-lnnbg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lnnbg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-lnnbg,UID:234b7d96-91d4-46fc-b76d-0292ac055ee9,ResourceVersion:19399819,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00200ddb7 0xc00200ddb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00200dea0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00200df00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-05 13:30:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7b8e4b478cf004e651c37aee9538dd5265f43ea34a8b2a873f7e8b7ea5c36a46}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.304: INFO: Pod "nginx-deployment-7b8c6f4498-mq6c5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mq6c5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-mq6c5,UID:24bd976d-f2e9-46d4-bdb9-c07760dbdaef,ResourceVersion:19399995,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc000540847 0xc000540848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000540980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000540a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-05 13:31:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.304: INFO: Pod "nginx-deployment-7b8c6f4498-rswwk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rswwk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-rswwk,UID:f245002e-0d91-47eb-b639-f0210e8ce296,ResourceVersion:19399964,Generation:0,CreationTimestamp:2020-01-05 13:31:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc000540c57 0xc000540c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000540d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000540db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-05 13:31:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.304: INFO: Pod "nginx-deployment-7b8c6f4498-t6sxm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t6sxm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-t6sxm,UID:c62ca8d2-53f3-418b-9e95-e2e8ea2a62c7,ResourceVersion:19399834,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc000541837 0xc000541838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000541fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00060a420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-05 13:30:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cfac8d6f33a31e884edc63681747410560f585e8bcbf1e1bc4ca008145d1ff90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.305: INFO: Pod "nginx-deployment-7b8c6f4498-tbz9p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tbz9p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-tbz9p,UID:92c5a200-1720-4e03-9675-86f32e9d60b5,ResourceVersion:19399831,Generation:0,CreationTimestamp:2020-01-05 13:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00060a6b7 0xc00060a6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00060ae10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00060ae80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-05 13:30:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 13:31:08 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://63d95be6839edd946b3a08f3e433fd3987a391837b8124827d5286d9616567da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.305: INFO: Pod "nginx-deployment-7b8c6f4498-v6qv8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v6qv8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-v6qv8,UID:96c4cdad-8c56-47e7-a5a1-ac0400dd3d67,ResourceVersion:19399942,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00060b297 0xc00060b298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00060b7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00060b820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.305: INFO: Pod "nginx-deployment-7b8c6f4498-vn6k5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vn6k5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-vn6k5,UID:55f23c80-7344-4581-b6d1-ec3d2f499acb,ResourceVersion:19399969,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00060bac7 0xc00060bac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00060bc80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00060bcd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.306: INFO: Pod "nginx-deployment-7b8c6f4498-vw4dh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vw4dh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-vw4dh,UID:405fd3c5-924f-490f-8e2b-6b8b356c2631,ResourceVersion:19399949,Generation:0,CreationTimestamp:2020-01-05 13:31:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00060bf37 0xc00060bf38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00033c090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00033c0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.306: INFO: Pod "nginx-deployment-7b8c6f4498-vwr87" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vwr87,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-vwr87,UID:f3e5d482-d2f8-4a22-a604-076da363d96b,ResourceVersion:19399973,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00033c237 0xc00033c238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00033c330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00033c370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  5 13:31:24.306: INFO: Pod "nginx-deployment-7b8c6f4498-vx5zz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vx5zz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7917,SelfLink:/api/v1/namespaces/deployment-7917/pods/nginx-deployment-7b8c6f4498-vx5zz,UID:c6d6574b-0258-4ce0-8a99-090787e80047,ResourceVersion:19399967,Generation:0,CreationTimestamp:2020-01-05 13:31:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ea9e2db0-c765-47ab-beb4-c5e294912b22 0xc00033c467 0xc00033c468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5fhpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5fhpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-5fhpz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00033c530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00033c550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 13:31:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:31:24.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7917" for this suite.
Jan  5 13:32:06.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:32:06.292: INFO: namespace deployment-7917 deletion completed in 40.738355171s

• [SLOW TEST:80.132 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
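
Note: the proportional-scaling test above drives an nginx Deployment through a rolling update and then scales it while old and new ReplicaSets coexist, expecting the additional replicas to be split across them in proportion to their current sizes. A minimal sketch of that setup follows; the replica counts and maxSurge/maxUnavailable values are illustrative, only the image and the name=nginx label come from the pod dumps above:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Scale while a rollout is still in flight; the deployment controller
# distributes the extra replicas across the old and new ReplicaSets in
# proportion to their existing sizes.
kubectl scale deployment nginx-deployment --replicas=30
kubectl get rs -l name=nginx
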
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:32:06.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  5 13:32:16.689: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:32:16.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7800" for this suite.
Jan  5 13:32:22.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:32:22.928: INFO: namespace container-runtime-7800 deletion completed in 6.171463553s

• [SLOW TEST:16.636 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
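
Note: the test above writes "OK" into the container's termination-message file and verifies it is surfaced in the pod status even though the policy is FallbackToLogsOnError (logs are only consulted when the file is empty and the container failed). A minimal reproduction, assuming a busybox image; the pod name is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# After the pod succeeds, the file contents appear in the pod status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
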
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:32:22.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:33:10.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8558" for this suite.
Jan  5 13:33:16.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:33:16.617: INFO: namespace container-runtime-8558 deletion completed in 6.150563836s

• [SLOW TEST:53.689 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
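
Note: the three containers above ('terminate-cmd-rpa', '-rpof', '-rpn') exercise restartPolicy Always, OnFailure, and Never against a command that exits, and the test asserts on the resulting RestartCount, Phase, Ready condition, and State. Those same fields can be inspected directly; a sketch with OnFailure and an illustrative pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits 1, so OnFailure keeps restarting it
EOF
# The status fields the test checks:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].ready}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'
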
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:33:16.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-5w94
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 13:33:16.714: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5w94" in namespace "subpath-1572" to be "success or failure"
Jan  5 13:33:16.819: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Pending", Reason="", readiness=false. Elapsed: 104.789878ms
Jan  5 13:33:18.830: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115783369s
Jan  5 13:33:20.837: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122721889s
Jan  5 13:33:22.847: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133262833s
Jan  5 13:33:24.870: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 8.155982552s
Jan  5 13:33:26.880: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 10.165983685s
Jan  5 13:33:28.890: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 12.17575688s
Jan  5 13:33:30.898: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 14.183684225s
Jan  5 13:33:32.912: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 16.19827657s
Jan  5 13:33:34.922: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 18.207698712s
Jan  5 13:33:36.930: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 20.216610834s
Jan  5 13:33:38.937: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 22.223148874s
Jan  5 13:33:40.950: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 24.235694162s
Jan  5 13:33:42.958: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 26.244208379s
Jan  5 13:33:44.967: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Running", Reason="", readiness=true. Elapsed: 28.252884499s
Jan  5 13:33:46.977: INFO: Pod "pod-subpath-test-projected-5w94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.26318466s
STEP: Saw pod success
Jan  5 13:33:46.977: INFO: Pod "pod-subpath-test-projected-5w94" satisfied condition "success or failure"
Jan  5 13:33:46.981: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-5w94 container test-container-subpath-projected-5w94: 
STEP: delete the pod
Jan  5 13:33:47.111: INFO: Waiting for pod pod-subpath-test-projected-5w94 to disappear
Jan  5 13:33:47.118: INFO: Pod pod-subpath-test-projected-5w94 no longer exists
STEP: Deleting pod pod-subpath-test-projected-5w94
Jan  5 13:33:47.118: INFO: Deleting pod "pod-subpath-test-projected-5w94" in namespace "subpath-1572"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:33:47.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1572" for this suite.
Jan  5 13:33:53.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:33:53.282: INFO: namespace subpath-1572 deletion completed in 6.156472796s

• [SLOW TEST:36.665 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
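
Note: "atomic writer" volumes (configMap, secret, downwardAPI, projected) are updated via an atomic symlink swap, and the test above checks that mounting a single key with subPath still works on top of that mechanism. A sketch with all names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-data
data:
  key: contents
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /mnt/key"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/key
      subPath: key            # mount one file out of the projected volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-data
EOF
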
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:33:53.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9968.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9968.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9968.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9968.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9968.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 13:34:05.597: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.605: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.611: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9968.svc.cluster.local from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.619: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.626: INFO: Unable to read jessie_udp@PodARecord from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.632: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577: the server could not find the requested resource (get pods dns-test-8a420686-1709-45df-8b5e-240909eb2577)
Jan  5 13:34:05.632: INFO: Lookups using dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9968.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  5 13:34:10.694: INFO: DNS probes using dns-9968/dns-test-8a420686-1709-45df-8b5e-240909eb2577 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:34:10.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9968" for this suite.
Jan  5 13:34:16.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:34:16.990: INFO: namespace dns-9968 deletion completed in 6.19298874s

• [SLOW TEST:23.708 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
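
Note: the wheezy/jessie probe loops quoted above reduce to a handful of checks run inside the client pod; the transient "Unable to read" errors are just the framework polling before the probers have written their result files. The underlying checks, with hostnames from this run's namespace (dns-9968):

getent hosts dns-querier-1.dns-test-service.dns-9968.svc.cluster.local
getent hosts dns-querier-1
# Pod A record, derived from the pod's own IP:
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9968.pod.cluster.local"}')
dig +notcp +noall +answer +search "$podARec" A   # over UDP
dig +tcp   +noall +answer +search "$podARec" A   # over TCP
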
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:34:16.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b62e0cf4-0fce-445c-8a06-a9cf2d982a3f in namespace container-probe-7837
Jan  5 13:34:25.139: INFO: Started pod liveness-b62e0cf4-0fce-445c-8a06-a9cf2d982a3f in namespace container-probe-7837
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 13:34:25.147: INFO: Initial restart count of pod liveness-b62e0cf4-0fce-445c-8a06-a9cf2d982a3f is 0
Jan  5 13:34:49.473: INFO: Restart count of pod container-probe-7837/liveness-b62e0cf4-0fce-445c-8a06-a9cf2d982a3f is now 1 (24.326412473s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:34:49.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7837" for this suite.
Jan  5 13:34:55.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:34:55.662: INFO: namespace container-probe-7837 deletion completed in 6.132520654s

• [SLOW TEST:38.671 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
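
Note: the probe test above creates a pod whose /healthz endpoint starts failing after a while and verifies restartCount goes from 0 to 1. A minimal equivalent using the example liveness image from the Kubernetes docs; the image, port, and timings below are assumptions, not taken from this log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz OK briefly, then returns 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Watch the restart count climb once the probe starts failing:
kubectl get pod liveness-http-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
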
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:34:55.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  5 13:34:55.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3939'
Jan  5 13:34:58.085: INFO: stderr: ""
Jan  5 13:34:58.085: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  5 13:34:59.100: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:34:59.100: INFO: Found 0 / 1
Jan  5 13:35:00.102: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:00.102: INFO: Found 0 / 1
Jan  5 13:35:01.094: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:01.094: INFO: Found 0 / 1
Jan  5 13:35:02.096: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:02.097: INFO: Found 0 / 1
Jan  5 13:35:03.097: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:03.097: INFO: Found 0 / 1
Jan  5 13:35:04.095: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:04.095: INFO: Found 0 / 1
Jan  5 13:35:05.098: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:05.098: INFO: Found 0 / 1
Jan  5 13:35:06.117: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:06.117: INFO: Found 1 / 1
Jan  5 13:35:06.117: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  5 13:35:06.124: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 13:35:06.124: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  5 13:35:06.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939'
Jan  5 13:35:06.403: INFO: stderr: ""
Jan  5 13:35:06.403: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jan 13:35:05.073 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 13:35:05.073 # Server started, Redis version 3.2.12\n1:M 05 Jan 13:35:05.074 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 13:35:05.074 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  5 13:35:06.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939 --tail=1'
Jan  5 13:35:06.595: INFO: stderr: ""
Jan  5 13:35:06.596: INFO: stdout: "1:M 05 Jan 13:35:05.074 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  5 13:35:06.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939 --limit-bytes=1'
Jan  5 13:35:06.750: INFO: stderr: ""
Jan  5 13:35:06.750: INFO: stdout: " "
STEP: exposing timestamps
Jan  5 13:35:06.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939 --tail=1 --timestamps'
Jan  5 13:35:06.871: INFO: stderr: ""
Jan  5 13:35:06.871: INFO: stdout: "2020-01-05T13:35:05.075551586Z 1:M 05 Jan 13:35:05.074 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  5 13:35:09.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939 --since=1s'
Jan  5 13:35:09.616: INFO: stderr: ""
Jan  5 13:35:09.616: INFO: stdout: ""
Jan  5 13:35:09.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wqdr2 redis-master --namespace=kubectl-3939 --since=24h'
Jan  5 13:35:09.764: INFO: stderr: ""
Jan  5 13:35:09.764: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jan 13:35:05.073 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 13:35:05.073 # Server started, Redis version 3.2.12\n1:M 05 Jan 13:35:05.074 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 13:35:05.074 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  5 13:35:09.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3939'
Jan  5 13:35:09.929: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 13:35:09.929: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  5 13:35:09.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3939'
Jan  5 13:35:10.028: INFO: stderr: "No resources found.\n"
Jan  5 13:35:10.028: INFO: stdout: ""
Jan  5 13:35:10.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3939 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 13:35:10.155: INFO: stderr: ""
Jan  5 13:35:10.155: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:35:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3939" for this suite.
Jan  5 13:35:16.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:35:16.334: INFO: namespace kubectl-3939 deletion completed in 6.159088415s

• [SLOW TEST:20.672 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
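
Note: the filtering variants the test runs above, stripped of the kubeconfig plumbing (pod, container, and namespace names are the ones from this run):

kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939                        # full log
kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939 --tail=1               # last line only
kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939 --limit-bytes=1        # first byte only
kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939 --tail=1 --timestamps  # RFC3339 prefix
kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939 --since=1s             # empty if nothing logged recently
kubectl logs redis-master-wqdr2 redis-master -n kubectl-3939 --since=24h
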
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:35:16.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-245f1e76-0403-45cf-b001-0bc3a4ad66d4
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:35:26.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4511" for this suite.
Jan  5 13:35:50.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:35:50.783: INFO: namespace configmap-4511 deletion completed in 24.133987982s

• [SLOW TEST:34.448 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:35:50.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:35:50.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  5 13:35:51.071: INFO: stderr: ""
Jan  5 13:35:51.071: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:35:51.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7097" for this suite.
Jan  5 13:35:57.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:35:57.224: INFO: namespace kubectl-7097 deletion completed in 6.147639946s

• [SLOW TEST:6.441 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
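
Note: the check above amounts to running kubectl version and asserting that both the client and server halves are present; structured output makes the same assertion easy to script:

kubectl version            # human-readable client and server versions
kubectl version -o json    # same data as JSON, with clientVersion and serverVersion objects
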
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:35:57.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  5 13:35:57.273: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:36:10.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9684" for this suite.
Jan  5 13:36:16.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:36:16.527: INFO: namespace init-container-9684 deletion completed in 6.305170717s

• [SLOW TEST:19.303 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
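
Note: with restartPolicy: Never a failed init container is not retried, so the pod goes straight to Failed and the app containers never start, which is what the test asserts. A minimal reproduction (image and names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]   # never runs
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}{"\n"}'   # Failed
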
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:36:16.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan  5 13:36:16.634: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix100916947/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:36:16.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5210" for this suite.
Jan  5 13:36:22.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:36:22.892: INFO: namespace kubectl-5210 deletion completed in 6.168587945s

• [SLOW TEST:6.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
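
Note: the test above serves the API proxy over a unix socket instead of a TCP port and fetches /api/ through it. The socket path below is illustrative (the test generates a temporary one):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# The API is then reachable over the socket, e.g. with curl (>= 7.40):
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
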
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:36:22.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f58ddc1a-055d-45d1-8fcf-868922e436c0
STEP: Creating secret with name s-test-opt-upd-25ec0ca9-aa8f-4391-b50d-40779d73aad1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f58ddc1a-055d-45d1-8fcf-868922e436c0
STEP: Updating secret s-test-opt-upd-25ec0ca9-aa8f-4391-b50d-40779d73aad1
STEP: Creating secret with name s-test-opt-create-5ba96f0d-b28b-4282-9487-981c37f3b84e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:38:07.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3515" for this suite.
Jan  5 13:38:35.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:38:35.210: INFO: namespace projected-3515 deletion completed in 28.203748725s

• [SLOW TEST:132.318 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
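
Note: marking a projected secret source optional: true lets the pod mount the volume even while the secret is absent, and the kubelet reflects later deletes, updates, and creates into the mounted files, which is what the test waits to observe. A sketch, with secret and pod names shortened from the ones above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-upd
          optional: true
      - secret:
          name: s-test-opt-create
          optional: true   # may not exist yet; the mount still succeeds
EOF
# Delete/update/create the referenced secrets and watch the files under
# /etc/projected change after the kubelet's next sync.
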
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:38:35.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  5 13:41:35.716: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:35.831: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:37.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:37.849: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:39.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:39.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:41.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:41.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:43.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:43.846: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:45.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:45.845: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:47.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:47.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:49.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:49.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:51.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:51.840: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:53.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:53.853: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:55.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:55.840: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:57.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:57.848: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:41:59.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:41:59.842: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:01.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:01.844: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:03.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:03.856: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:05.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:05.844: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:07.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:07.842: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:09.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:09.840: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:11.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:11.852: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:13.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:13.847: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:15.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:15.843: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:17.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:17.846: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:19.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:19.847: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:21.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:21.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:23.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:23.849: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:25.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:25.842: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:27.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:27.849: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:29.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:29.846: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:31.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:31.843: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:33.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:33.863: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:35.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:35.861: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:37.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:37.856: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:39.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:39.843: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:41.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:41.849: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:43.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:43.845: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:45.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:45.844: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:47.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:47.844: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:49.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:49.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:51.833: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:51.869: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:53.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:53.848: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:55.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:55.847: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:57.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:57.847: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:42:59.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:42:59.864: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:43:01.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:43:01.841: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:43:03.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:43:03.860: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:43:05.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:43:05.842: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  5 13:43:07.832: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  5 13:43:07.844: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:43:07.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-604" for this suite.
Jan  5 13:43:29.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:43:30.043: INFO: namespace container-lifecycle-hook-604 deletion completed in 22.187050061s

• [SLOW TEST:294.833 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
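A minimal sketch of the pod under test, assuming a busybox image and an illustrative hook command; only the pod name comes from the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name taken from the log
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox                     # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # illustrative hook: runs inside the container right after it starts
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]

The kubelet does not mark the container Running until the postStart handler returns, which is what the "check poststart hook" step relies on before the pod is deleted.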
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:43:30.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-7b7d6611-829d-41a2-91e3-793344f627c9 in namespace container-probe-2273
Jan  5 13:43:38.214: INFO: Started pod liveness-7b7d6611-829d-41a2-91e3-793344f627c9 in namespace container-probe-2273
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 13:43:38.219: INFO: Initial restart count of pod liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is 0
Jan  5 13:43:58.330: INFO: Restart count of pod container-probe-2273/liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is now 1 (20.111154483s elapsed)
Jan  5 13:44:18.415: INFO: Restart count of pod container-probe-2273/liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is now 2 (40.195858933s elapsed)
Jan  5 13:44:38.572: INFO: Restart count of pod container-probe-2273/liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is now 3 (1m0.352767651s elapsed)
Jan  5 13:44:58.681: INFO: Restart count of pod container-probe-2273/liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is now 4 (1m20.4620056s elapsed)
Jan  5 13:46:01.123: INFO: Restart count of pod container-probe-2273/liveness-7b7d6611-829d-41a2-91e3-793344f627c9 is now 5 (2m22.90406002s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:46:01.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2273" for this suite.
Jan  5 13:46:07.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:46:07.384: INFO: namespace container-probe-2273 deletion completed in 6.187820839s

• [SLOW TEST:157.340 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
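The roughly 20-second restart cadence in the log is what a deliberately failing liveness probe produces. A sketch under assumed timings and a busybox image; the real test's pod spec may differ.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec     # the log's pod name is generated, so this is illustrative
spec:
  containers:
  - name: liveness
    image: busybox        # assumed image
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1   # fail fast so the kubelet keeps restarting the container

Each restart reruns the container command, so the probe passes briefly, fails once /tmp/health is removed, and the kubelet restarts the container again. restartCount can only grow, which is exactly the property the test asserts.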
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:46:07.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  5 13:46:07.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2549'
Jan  5 13:46:10.031: INFO: stderr: ""
Jan  5 13:46:10.032: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 13:46:10.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:10.255: INFO: stderr: ""
Jan  5 13:46:10.255: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
Jan  5 13:46:10.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl7f4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:10.371: INFO: stderr: ""
Jan  5 13:46:10.371: INFO: stdout: ""
Jan  5 13:46:10.371: INFO: update-demo-nautilus-nl7f4 is created but not running
Jan  5 13:46:15.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:15.517: INFO: stderr: ""
Jan  5 13:46:15.517: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
Jan  5 13:46:15.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl7f4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:15.655: INFO: stderr: ""
Jan  5 13:46:15.656: INFO: stdout: ""
Jan  5 13:46:15.656: INFO: update-demo-nautilus-nl7f4 is created but not running
Jan  5 13:46:20.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:20.886: INFO: stderr: ""
Jan  5 13:46:20.886: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
Jan  5 13:46:20.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl7f4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:21.088: INFO: stderr: ""
Jan  5 13:46:21.088: INFO: stdout: "true"
Jan  5 13:46:21.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nl7f4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:21.213: INFO: stderr: ""
Jan  5 13:46:21.213: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 13:46:21.213: INFO: validating pod update-demo-nautilus-nl7f4
Jan  5 13:46:21.233: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 13:46:21.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 13:46:21.234: INFO: update-demo-nautilus-nl7f4 is verified up and running
Jan  5 13:46:21.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:21.361: INFO: stderr: ""
Jan  5 13:46:21.361: INFO: stdout: "true"
Jan  5 13:46:21.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:21.451: INFO: stderr: ""
Jan  5 13:46:21.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 13:46:21.451: INFO: validating pod update-demo-nautilus-qcdlb
Jan  5 13:46:21.468: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 13:46:21.469: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 13:46:21.469: INFO: update-demo-nautilus-qcdlb is verified up and running
STEP: scaling down the replication controller
Jan  5 13:46:21.471: INFO: scanned /root for discovery docs: 
Jan  5 13:46:21.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2549'
Jan  5 13:46:22.606: INFO: stderr: ""
Jan  5 13:46:22.606: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 13:46:22.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:22.708: INFO: stderr: ""
Jan  5 13:46:22.708: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  5 13:46:27.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:27.817: INFO: stderr: ""
Jan  5 13:46:27.817: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  5 13:46:32.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:32.985: INFO: stderr: ""
Jan  5 13:46:32.985: INFO: stdout: "update-demo-nautilus-nl7f4 update-demo-nautilus-qcdlb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  5 13:46:37.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:38.150: INFO: stderr: ""
Jan  5 13:46:38.150: INFO: stdout: "update-demo-nautilus-qcdlb "
Jan  5 13:46:38.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:38.282: INFO: stderr: ""
Jan  5 13:46:38.282: INFO: stdout: "true"
Jan  5 13:46:38.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:38.388: INFO: stderr: ""
Jan  5 13:46:38.388: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 13:46:38.388: INFO: validating pod update-demo-nautilus-qcdlb
Jan  5 13:46:38.398: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 13:46:38.398: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 13:46:38.398: INFO: update-demo-nautilus-qcdlb is verified up and running
STEP: scaling up the replication controller
Jan  5 13:46:38.400: INFO: scanned /root for discovery docs: 
Jan  5 13:46:38.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2549'
Jan  5 13:46:39.598: INFO: stderr: ""
Jan  5 13:46:39.598: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 13:46:39.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:39.743: INFO: stderr: ""
Jan  5 13:46:39.744: INFO: stdout: "update-demo-nautilus-2qmr7 update-demo-nautilus-qcdlb "
Jan  5 13:46:39.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:39.952: INFO: stderr: ""
Jan  5 13:46:39.952: INFO: stdout: ""
Jan  5 13:46:39.952: INFO: update-demo-nautilus-2qmr7 is created but not running
Jan  5 13:46:44.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2549'
Jan  5 13:46:45.108: INFO: stderr: ""
Jan  5 13:46:45.108: INFO: stdout: "update-demo-nautilus-2qmr7 update-demo-nautilus-qcdlb "
Jan  5 13:46:45.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmr7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:45.303: INFO: stderr: ""
Jan  5 13:46:45.303: INFO: stdout: "true"
Jan  5 13:46:45.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2qmr7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:45.442: INFO: stderr: ""
Jan  5 13:46:45.442: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 13:46:45.442: INFO: validating pod update-demo-nautilus-2qmr7
Jan  5 13:46:45.455: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 13:46:45.455: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 13:46:45.455: INFO: update-demo-nautilus-2qmr7 is verified up and running
Jan  5 13:46:45.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:45.610: INFO: stderr: ""
Jan  5 13:46:45.611: INFO: stdout: "true"
Jan  5 13:46:45.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qcdlb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2549'
Jan  5 13:46:45.734: INFO: stderr: ""
Jan  5 13:46:45.734: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 13:46:45.734: INFO: validating pod update-demo-nautilus-qcdlb
Jan  5 13:46:45.739: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 13:46:45.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 13:46:45.739: INFO: update-demo-nautilus-qcdlb is verified up and running
STEP: using delete to clean up resources
Jan  5 13:46:45.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2549'
Jan  5 13:46:46.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 13:46:46.899: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  5 13:46:46.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2549'
Jan  5 13:46:47.210: INFO: stderr: "No resources found.\n"
Jan  5 13:46:47.210: INFO: stdout: ""
Jan  5 13:46:47.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2549 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 13:46:48.661: INFO: stderr: ""
Jan  5 13:46:48.662: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:46:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2549" for this suite.
Jan  5 13:47:10.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:47:11.041: INFO: namespace kubectl-2549 deletion completed in 22.153636309s

• [SLOW TEST:63.656 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
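The manifest piped to kubectl create -f - is not echoed in the log, but from the resource name, the image, and the -l name=update-demo selector it must look roughly like this sketch (the replica count of 2 matches the two initial pods; the container port is an assumption).

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80   # assumed port

Scaling is then plain kubectl, exactly as logged: kubectl scale rc update-demo-nautilus --replicas=1 and back to --replicas=2, with the test polling the pod list after each call until the observed count matches.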
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:47:11.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:47:11.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719" in namespace "downward-api-4672" to be "success or failure"
Jan  5 13:47:11.274: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719": Phase="Pending", Reason="", readiness=false. Elapsed: 59.313609ms
Jan  5 13:47:13.280: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06564532s
Jan  5 13:47:15.290: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075778582s
Jan  5 13:47:17.299: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085108056s
Jan  5 13:47:19.309: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094881024s
STEP: Saw pod success
Jan  5 13:47:19.309: INFO: Pod "downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719" satisfied condition "success or failure"
Jan  5 13:47:19.313: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719 container client-container: 
STEP: delete the pod
Jan  5 13:47:19.422: INFO: Waiting for pod downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719 to disappear
Jan  5 13:47:19.430: INFO: Pod downwardapi-volume-f856314b-a31e-4398-8f5a-7b347fb62719 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:47:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4672" for this suite.
Jan  5 13:47:25.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:47:25.667: INFO: namespace downward-api-4672 deletion completed in 6.218918581s

• [SLOW TEST:14.625 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
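A sketch of the downward API volume being exercised; the container name client-container comes from the log, while the pod name, image, resource values, and file path are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    resources:
      requests:
        memory: 32Mi                 # illustrative value
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory   # written into the mounted file, in bytes by default

The test reads the container's output of that file and compares it against the request declared in the pod spec.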
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:47:25.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-96bdf6f4-7365-4e8d-bc40-53d2e16043b1
STEP: Creating a pod to test consume secrets
Jan  5 13:47:25.815: INFO: Waiting up to 5m0s for pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40" in namespace "secrets-1230" to be "success or failure"
Jan  5 13:47:25.822: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735072ms
Jan  5 13:47:27.916: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100879484s
Jan  5 13:47:29.930: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115516123s
Jan  5 13:47:31.938: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123159188s
Jan  5 13:47:33.962: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146861555s
STEP: Saw pod success
Jan  5 13:47:33.962: INFO: Pod "pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40" satisfied condition "success or failure"
Jan  5 13:47:33.974: INFO: Trying to get logs from node iruya-node pod pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40 container secret-volume-test: 
STEP: delete the pod
Jan  5 13:47:34.129: INFO: Waiting for pod pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40 to disappear
Jan  5 13:47:34.153: INFO: Pod pod-secrets-f92be423-026d-4e08-8a33-46eddecb0e40 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:47:34.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1230" for this suite.
Jan  5 13:47:40.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:47:40.383: INFO: namespace secrets-1230 deletion completed in 6.219633123s

• [SLOW TEST:14.716 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
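"Multiple volumes" here means the same secret mounted more than once. A sketch with the secret name from the log; the mount paths, pod name, and image are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example    # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test   # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  # both volumes reference the same secret, so its keys show up at both mount points
  - name: secret-volume-1
    secret:
      secretName: secret-test-96bdf6f4-7365-4e8d-bc40-53d2e16043b1
  - name: secret-volume-2
    secret:
      secretName: secret-test-96bdf6f4-7365-4e8d-bc40-53d2e16043b1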
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:47:40.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-7210bd19-4cb4-4931-8136-5c1c65e267d1
STEP: Creating secret with name secret-projected-all-test-volume-dfe1c46f-ec3e-4294-a2e2-bd220c90f95c
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  5 13:47:40.569: INFO: Waiting up to 5m0s for pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd" in namespace "projected-449" to be "success or failure"
Jan  5 13:47:40.575: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.570644ms
Jan  5 13:47:42.595: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025668558s
Jan  5 13:47:44.615: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045312966s
Jan  5 13:47:46.626: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05679368s
Jan  5 13:47:48.651: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081518924s
STEP: Saw pod success
Jan  5 13:47:48.651: INFO: Pod "projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd" satisfied condition "success or failure"
Jan  5 13:47:48.657: INFO: Trying to get logs from node iruya-node pod projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd container projected-all-volume-test: 
STEP: delete the pod
Jan  5 13:47:48.725: INFO: Waiting for pod projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd to disappear
Jan  5 13:47:48.737: INFO: Pod projected-volume-aaa68a4f-47d7-4932-9961-708ec02caccd no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:47:48.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-449" for this suite.
Jan  5 13:47:54.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:47:54.956: INFO: namespace projected-449 deletion completed in 6.211917193s

• [SLOW TEST:14.571 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
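All three projection sources land in a single volume here. A sketch using the configMap and secret names from the STEP lines; the key names, file paths, pod name, and image are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example      # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test   # container name from the log
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                        # downward API, configMap, and secret together
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume-7210bd19-4cb4-4931-8136-5c1c65e267d1
          items:
          - key: data       # assumed key name
            path: cm
      - secret:
          name: secret-projected-all-test-volume-dfe1c46f-ec3e-4294-a2e2-bd220c90f95c
          items:
          - key: data       # assumed key name
            path: secret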
SSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:47:54.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  5 13:47:55.059: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9706" to be "success or failure"
Jan  5 13:47:55.063: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.866683ms
Jan  5 13:47:57.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019891334s
Jan  5 13:47:59.090: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031238454s
Jan  5 13:48:01.096: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036361543s
Jan  5 13:48:03.105: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045465075s
Jan  5 13:48:05.115: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055332578s
Jan  5 13:48:07.130: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.070467054s
STEP: Saw pod success
Jan  5 13:48:07.130: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  5 13:48:07.134: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  5 13:48:07.192: INFO: Waiting for pod pod-host-path-test to disappear
Jan  5 13:48:07.225: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:48:07.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9706" for this suite.
Jan  5 13:48:13.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:48:13.419: INFO: namespace hostpath-9706 deletion completed in 6.187550404s

• [SLOW TEST:18.462 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
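A sketch of the hostPath pod; the pod and container names come from the log, while the host path, type, image, and command are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test   # name taken from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1   # container name from the log
    image: busybox           # assumed; the suite uses a purpose-built mount-test image
    command: ["sh", "-c", "stat -c %a /test-volume"]   # print the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test   # assumed path on the node
      type: DirectoryOrCreate     # assumed type

The "correct mode" in the spec name refers to the permissions the container observes on the mount point, which the test recovers from the container logs.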
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:48:13.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-842/configmap-test-ebf90889-eec0-4cb1-8459-ca47b41be879
STEP: Creating a pod to test consume configMaps
Jan  5 13:48:13.539: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9" in namespace "configmap-842" to be "success or failure"
Jan  5 13:48:13.545: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.199176ms
Jan  5 13:48:15.559: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019379438s
Jan  5 13:48:17.568: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028604879s
Jan  5 13:48:19.576: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036712666s
Jan  5 13:48:21.589: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04954003s
STEP: Saw pod success
Jan  5 13:48:21.589: INFO: Pod "pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9" satisfied condition "success or failure"
Jan  5 13:48:21.593: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9 container env-test: 
STEP: delete the pod
Jan  5 13:48:21.655: INFO: Waiting for pod pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9 to disappear
Jan  5 13:48:21.680: INFO: Pod pod-configmaps-b1cf0eea-21eb-4e4b-bf8a-78eca32e3de9 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:48:21.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-842" for this suite.
Jan  5 13:48:27.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:48:27.870: INFO: namespace configmap-842 deletion completed in 6.183098038s

• [SLOW TEST:14.450 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
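A sketch of consuming a ConfigMap key as an environment variable; the ConfigMap name comes from the log, the key and variable names are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example   # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: env-test        # container name from the log
    image: busybox        # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA   # assumed variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-ebf90889-eec0-4cb1-8459-ca47b41be879
          key: data       # assumed key name

The test then checks the container's printed environment for the expected value.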
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:48:27.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-3596bab8-9b13-4826-9bc6-aba779337fc1
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:48:27.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8151" for this suite.
Jan  5 13:48:34.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:48:34.128: INFO: namespace configmap-8151 deletion completed in 6.128199404s

• [SLOW TEST:6.257 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
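This one is a negative test: API validation rejects a ConfigMap whose data map contains an empty key, so the create call itself fails and there is no pod to wait on, which is why the block above is so short. A sketch of the offending object, with the name from the log:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptyKey-3596bab8-9b13-4826-9bc6-aba779337fc1
data:
  "": value   # empty key: the API server rejects this at create time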
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:48:34.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:48:34.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:48:42.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6189" for this suite.
Jan  5 13:49:28.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:49:28.539: INFO: namespace pods-6189 deletion completed in 46.209080757s

• [SLOW TEST:54.410 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
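The interesting part of this test is the transport: logs are streamed from the pod's log subresource over a websocket rather than a plain HTTP GET. A sketch of a pod that produces log lines, with the endpoint noted as a comment; the pod name, image, and output are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-logs-websocket   # the log's pod name is generated
spec:
  containers:
  - name: main               # assumed container name
    image: busybox           # assumed image
    command: ["sh", "-c", "while true; do echo logline; sleep 1; done"]
# The suite then opens a websocket against the log subresource, conceptually:
#   GET /api/v1/namespaces/pods-6189/pods/<pod-name>/log?follow=true
# with a websocket upgrade handshake instead of an ordinary GET.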
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:49:28.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  5 13:49:28.684: INFO: Waiting up to 5m0s for pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a" in namespace "emptydir-4513" to be "success or failure"
Jan  5 13:49:28.690: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.656028ms
Jan  5 13:49:30.700: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016149863s
Jan  5 13:49:32.708: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024099812s
Jan  5 13:49:34.717: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032884522s
Jan  5 13:49:36.727: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04265592s
Jan  5 13:49:38.740: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055866433s
STEP: Saw pod success
Jan  5 13:49:38.740: INFO: Pod "pod-55d9c9cf-6877-412b-9b8b-9733c734156a" satisfied condition "success or failure"
Jan  5 13:49:38.745: INFO: Trying to get logs from node iruya-node pod pod-55d9c9cf-6877-412b-9b8b-9733c734156a container test-container: 
STEP: delete the pod
Jan  5 13:49:38.847: INFO: Waiting for pod pod-55d9c9cf-6877-412b-9b8b-9733c734156a to disappear
Jan  5 13:49:38.866: INFO: Pod pod-55d9c9cf-6877-412b-9b8b-9733c734156a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:49:38.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4513" for this suite.
Jan  5 13:49:44.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:49:45.032: INFO: namespace emptydir-4513 deletion completed in 6.152391888s

• [SLOW TEST:16.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
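Reading the spec name, (non-root,0666,default) decodes as: run as a non-root UID, expect a file created with mode 0666, on the default (node-disk) emptyDir medium. A sketch under that reading; the actual test uses a purpose-built mount-test image rather than a shell one-liner, and the UID is an assumption.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example   # the log's pod name is generated
spec:
  securityContext:
    runAsUser: 1001            # non-root; exact UID assumed
  restartPolicy: Never
  containers:
  - name: test-container       # container name from the log
    image: busybox             # assumed image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium, i.e. node disk rather than memory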
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:49:45.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan  5 13:49:45.110: INFO: Waiting up to 5m0s for pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec" in namespace "var-expansion-9383" to be "success or failure"
Jan  5 13:49:45.162: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec": Phase="Pending", Reason="", readiness=false. Elapsed: 51.903034ms
Jan  5 13:49:47.172: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0614385s
Jan  5 13:49:49.178: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067092542s
Jan  5 13:49:51.190: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07957376s
Jan  5 13:49:53.201: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090747882s
STEP: Saw pod success
Jan  5 13:49:53.201: INFO: Pod "var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec" satisfied condition "success or failure"
Jan  5 13:49:53.205: INFO: Trying to get logs from node iruya-node pod var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec container dapi-container: 
STEP: delete the pod
Jan  5 13:49:53.339: INFO: Waiting for pod var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec to disappear
Jan  5 13:49:53.359: INFO: Pod var-expansion-6b3cae26-8498-47ee-8de1-c7b589a2c3ec no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:49:53.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9383" for this suite.
Jan  5 13:49:59.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:49:59.551: INFO: namespace var-expansion-9383 deletion completed in 6.185147136s

• [SLOW TEST:14.519 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
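A sketch of $(VAR) expansion in a container's args; the container name dapi-container comes from the log, the variable name and message are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container   # container name from the log
    image: busybox         # assumed image
    command: ["sh", "-c"]
    args: ["echo message is $(MESSAGE)"]   # expanded by the kubelet, not by the shell
    env:
    - name: MESSAGE        # assumed variable name
      value: hello from the environment

Because the kubelet performs the substitution before the container starts, the expansion works even for images that ship no shell at all.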
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:49:59.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-dbe3e29a-d622-43a3-ab13-2a2905a2fe0b
STEP: Creating a pod to test consume configMaps
Jan  5 13:49:59.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7" in namespace "projected-5331" to be "success or failure"
Jan  5 13:49:59.751: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.829908ms
Jan  5 13:50:01.760: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028179548s
Jan  5 13:50:03.773: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04077436s
Jan  5 13:50:05.790: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057716156s
Jan  5 13:50:07.800: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06781542s
STEP: Saw pod success
Jan  5 13:50:07.800: INFO: Pod "pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7" satisfied condition "success or failure"
Jan  5 13:50:07.805: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 13:50:07.936: INFO: Waiting for pod pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7 to disappear
Jan  5 13:50:07.948: INFO: Pod pod-projected-configmaps-8a55dbdd-22d5-4b44-8872-31afae71a3c7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:50:07.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5331" for this suite.
Jan  5 13:50:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:50:14.091: INFO: namespace projected-5331 deletion completed in 6.135554623s

• [SLOW TEST:14.540 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
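"Mappings and Item mode set" translates to items entries that remap a ConfigMap key to a new file path and give that file an explicit mode. A sketch with the ConfigMap name from the log; the key, path, mode, pod name, and image are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example    # the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name from the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-dbe3e29a-d622-43a3-ab13-2a2905a2fe0b
          items:
          - key: data-1             # assumed key name
            path: mapped/data-1     # the mapping: the key surfaces under this path
            mode: 0400              # the per-item file mode the test checks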
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:50:14.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1f3ca504-378d-461c-8663-34ae03d1739d
STEP: Creating a pod to test consume configMaps
Jan  5 13:50:14.266: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973" in namespace "configmap-2375" to be "success or failure"
Jan  5 13:50:14.290: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973": Phase="Pending", Reason="", readiness=false. Elapsed: 23.849293ms
Jan  5 13:50:16.305: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03808295s
Jan  5 13:50:18.316: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049267622s
Jan  5 13:50:20.331: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063992049s
Jan  5 13:50:22.340: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073031245s
STEP: Saw pod success
Jan  5 13:50:22.340: INFO: Pod "pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973" satisfied condition "success or failure"
Jan  5 13:50:22.343: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973 container configmap-volume-test: 
STEP: delete the pod
Jan  5 13:50:22.495: INFO: Waiting for pod pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973 to disappear
Jan  5 13:50:22.506: INFO: Pod pod-configmaps-1a0a7c0b-1474-4061-b91c-75417bcdc973 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:50:22.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2375" for this suite.
Jan  5 13:50:28.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:50:28.820: INFO: namespace configmap-2375 deletion completed in 6.220270736s

• [SLOW TEST:14.728 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
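The plain (non-projected) ConfigMap test exercises the same consumption path but with the configMap as the volume source directly. A compact sketch under the same placeholder-name assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	vol := v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			// no Projected wrapper here: the configMap is the volume source itself
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: cm.Name},
			},
		},
	}
	fmt.Printf("configmap %q consumed via volume %q\n", cm.Name, vol.Name)
}
```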
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:50:28.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  5 13:50:28.903: INFO: Waiting up to 5m0s for pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9" in namespace "emptydir-2668" to be "success or failure"
Jan  5 13:50:28.969: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 65.515921ms
Jan  5 13:50:30.978: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074870268s
Jan  5 13:50:32.990: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086166194s
Jan  5 13:50:34.999: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095547615s
Jan  5 13:50:37.009: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105364693s
STEP: Saw pod success
Jan  5 13:50:37.009: INFO: Pod "pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9" satisfied condition "success or failure"
Jan  5 13:50:37.013: INFO: Trying to get logs from node iruya-node pod pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9 container test-container: 
STEP: delete the pod
Jan  5 13:50:37.291: INFO: Waiting for pod pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9 to disappear
Jan  5 13:50:37.317: INFO: Pod pod-f3321b03-e6ce-4e4a-bf65-313b1c0dbfe9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:50:37.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2668" for this suite.
Jan  5 13:50:43.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:50:43.515: INFO: namespace emptydir-2668 deletion completed in 6.189557565s

• [SLOW TEST:14.695 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
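The emptyDir test name encodes its three parameters: run as a non-root user, write a file with 0644 permissions, on the node's default storage medium. Roughly, in Go; the UID, image, and paths below are placeholder assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // any non-zero UID satisfies "non-root"
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				VolumeSource: v1.VolumeSource{
					// Medium "" selects the node's default medium ("Memory" would mean tmpfs)
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault},
				},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// write a file with 0644 and list its permissions for the log check
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && ls -l /ed/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
	fmt.Printf("would create pod %q\n", pod.Name)
}
```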
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:50:43.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 13:50:43.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b" in namespace "projected-9846" to be "success or failure"
Jan  5 13:50:43.650: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.387975ms
Jan  5 13:50:45.666: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028552802s
Jan  5 13:50:47.807: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169987969s
Jan  5 13:50:49.817: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180242397s
Jan  5 13:50:51.840: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.20238906s
STEP: Saw pod success
Jan  5 13:50:51.840: INFO: Pod "downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b" satisfied condition "success or failure"
Jan  5 13:50:51.853: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b container client-container: 
STEP: delete the pod
Jan  5 13:50:51.965: INFO: Waiting for pod downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b to disappear
Jan  5 13:50:51.981: INFO: Pod downwardapi-volume-6299be7d-9e38-400f-931f-281ae250547b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:50:51.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9846" for this suite.
Jan  5 13:50:58.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:50:58.190: INFO: namespace projected-9846 deletion completed in 6.202608553s

• [SLOW TEST:14.674 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
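The downward API volume plugin being tested here projects a container's own resource request into a file via a resourceFieldRef. A minimal sketch, with placeholder names and a placeholder request value:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									// resolves to the named container's requests.cpu
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Printf("would create pod %q\n", pod.Name)
}
```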
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:50:58.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  5 13:51:08.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-e5d07bc2-e116-45ee-bf98-744c756c490d -c busybox-main-container --namespace=emptydir-3641 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  5 13:51:09.073: INFO: stderr: "I0105 13:51:08.677556    1787 log.go:172] (0xc000a782c0) (0xc00090a8c0) Create stream\nI0105 13:51:08.678110    1787 log.go:172] (0xc000a782c0) (0xc00090a8c0) Stream added, broadcasting: 1\nI0105 13:51:08.687728    1787 log.go:172] (0xc000a782c0) Reply frame received for 1\nI0105 13:51:08.687804    1787 log.go:172] (0xc000a782c0) (0xc0006a4280) Create stream\nI0105 13:51:08.687822    1787 log.go:172] (0xc000a782c0) (0xc0006a4280) Stream added, broadcasting: 3\nI0105 13:51:08.689344    1787 log.go:172] (0xc000a782c0) Reply frame received for 3\nI0105 13:51:08.689386    1787 log.go:172] (0xc000a782c0) (0xc00090a960) Create stream\nI0105 13:51:08.689399    1787 log.go:172] (0xc000a782c0) (0xc00090a960) Stream added, broadcasting: 5\nI0105 13:51:08.691713    1787 log.go:172] (0xc000a782c0) Reply frame received for 5\nI0105 13:51:08.885108    1787 log.go:172] (0xc000a782c0) Data frame received for 3\nI0105 13:51:08.885236    1787 log.go:172] (0xc0006a4280) (3) Data frame handling\nI0105 13:51:08.885296    1787 log.go:172] (0xc0006a4280) (3) Data frame sent\nI0105 13:51:09.059813    1787 log.go:172] (0xc000a782c0) Data frame received for 1\nI0105 13:51:09.059998    1787 log.go:172] (0xc000a782c0) (0xc0006a4280) Stream removed, broadcasting: 3\nI0105 13:51:09.060229    1787 log.go:172] (0xc00090a8c0) (1) Data frame handling\nI0105 13:51:09.060250    1787 log.go:172] (0xc000a782c0) (0xc00090a960) Stream removed, broadcasting: 5\nI0105 13:51:09.060278    1787 log.go:172] (0xc00090a8c0) (1) Data frame sent\nI0105 13:51:09.060298    1787 log.go:172] (0xc000a782c0) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0105 13:51:09.060320    1787 log.go:172] (0xc000a782c0) Go away received\nI0105 13:51:09.061733    1787 log.go:172] (0xc000a782c0) (0xc00090a8c0) Stream removed, broadcasting: 1\nI0105 13:51:09.061747    1787 log.go:172] (0xc000a782c0) (0xc0006a4280) Stream removed, broadcasting: 3\nI0105 13:51:09.061754    1787 log.go:172] (0xc000a782c0) (0xc00090a960) Stream removed, broadcasting: 5\n"
Jan  5 13:51:09.073: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:51:09.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3641" for this suite.
Jan  5 13:51:15.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:51:15.295: INFO: namespace emptydir-3641 deletion completed in 6.207683335s

• [SLOW TEST:17.105 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
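This spec relies on two containers in one pod mounting the same emptyDir: one container writes shareddata.txt, the other stays up so the file can be read back through it. A sketch under the same placeholder-image assumption:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	share := v1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-example"},
		Spec: v1.PodSpec{
			Volumes: []v1.Volume{{
				Name:         "shared-data",
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			Containers: []v1.Container{
				{
					// writer: drops a file into the shared emptyDir, then idles
					Name:         "busybox-sub-container",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []v1.VolumeMount{share},
				},
				{
					// reader: stays running so the file can be read via kubectl exec
					Name:         "busybox-main-container",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []v1.VolumeMount{share},
				},
			},
		},
	}
	fmt.Printf("would create pod %q\n", pod.Name)
}
```

Reading the file back through the second container is exactly what the `kubectl exec ... -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt` invocation in the log does; its stdout confirms the write was visible across containers.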
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:51:15.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-2b76abec-c8ab-49b9-a61f-64008018bf7e in namespace container-probe-2279
Jan  5 13:51:25.528: INFO: Started pod busybox-2b76abec-c8ab-49b9-a61f-64008018bf7e in namespace container-probe-2279
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 13:51:25.533: INFO: Initial restart count of pod busybox-2b76abec-c8ab-49b9-a61f-64008018bf7e is 0
Jan  5 13:52:19.897: INFO: Restart count of pod container-probe-2279/busybox-2b76abec-c8ab-49b9-a61f-64008018bf7e is now 1 (54.364150241s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:52:19.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2279" for this suite.
Jan  5 13:52:26.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:52:26.117: INFO: namespace container-probe-2279 deletion completed in 6.124654399s

• [SLOW TEST:70.821 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
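The restart observed above (count 0 to 1 after ~54s) comes from an exec liveness probe that succeeds while /tmp/health exists and fails once the container removes it. A sketch; note that the core/v1 client libraries contemporary with this 1.15 cluster name the embedded field Handler (newer releases renamed it ProbeHandler), and the timings and image are placeholder assumptions:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-example"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// create the probed file, then remove it so the probe starts failing
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &v1.Probe{
					// Probe.Handler in this API generation; ProbeHandler in later ones
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("kubelet restarts %q once the probe fails\n", pod.Name)
}
```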
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:52:26.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 13:52:26.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5158'
Jan  5 13:52:26.307: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 13:52:26.308: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  5 13:52:26.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-5158'
Jan  5 13:52:26.448: INFO: stderr: ""
Jan  5 13:52:26.448: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:52:26.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5158" for this suite.
Jan  5 13:52:48.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:52:48.605: INFO: namespace kubectl-5158 deletion completed in 22.143582268s

• [SLOW TEST:22.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
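Passing --restart=OnFailure is what steers `kubectl run` toward the (deprecated) job/v1 generator, so the object actually created is a batch/v1 Job. An equivalent Go sketch of that object:

```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: v1.PodTemplateSpec{
				Spec: v1.PodSpec{
					// RestartPolicyOnFailure is the semantic carried by --restart=OnFailure
					RestartPolicy: v1.RestartPolicyOnFailure,
					Containers: []v1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("would create job %q\n", job.Name)
}
```

The stderr captured in the log is kubectl itself warning that this generator is deprecated; `kubectl create job <name> --image=<image>` produces the same kind of object without the warning.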
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:52:48.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  5 13:52:57.378: INFO: Successfully updated pod "labelsupdate69d2a9ca-b682-4905-9ca0-1f7d96a04131"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:52:59.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3133" for this suite.
Jan  5 13:53:19.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:53:19.665: INFO: namespace projected-3133 deletion completed in 20.17639817s

• [SLOW TEST:31.060 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
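This test mounts the pod's own labels through a downward API projection and then mutates them; unlike env-var field refs, a volume-projected fieldRef is refreshed by the kubelet after the update, which is what the "Successfully updated pod" step goes on to verify. A minimal sketch of just the volume:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "labels",
							// re-rendered on label changes, with some kubelet sync delay
							FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("volume %q projects pod labels as a file\n", vol.Name)
}
```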
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:53:19.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ed1b1fcf-3ab8-43d9-847d-91f7fb130d93
STEP: Creating a pod to test consume configMaps
Jan  5 13:53:19.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799" in namespace "configmap-9511" to be "success or failure"
Jan  5 13:53:19.829: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Pending", Reason="", readiness=false. Elapsed: 64.016364ms
Jan  5 13:53:21.846: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081347094s
Jan  5 13:53:23.906: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141179796s
Jan  5 13:53:25.916: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150896073s
Jan  5 13:53:27.927: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162282692s
Jan  5 13:53:29.943: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177839864s
STEP: Saw pod success
Jan  5 13:53:29.943: INFO: Pod "pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799" satisfied condition "success or failure"
Jan  5 13:53:29.949: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799 container configmap-volume-test: 
STEP: delete the pod
Jan  5 13:53:30.027: INFO: Waiting for pod pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799 to disappear
Jan  5 13:53:30.040: INFO: Pod pod-configmaps-6d92ca79-41bb-4934-8b33-f7a5713fc799 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:53:30.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9511" for this suite.
Jan  5 13:53:36.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:53:36.247: INFO: namespace configmap-9511 deletion completed in 6.198352086s

• [SLOW TEST:16.581 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:53:36.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 13:53:36.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2674'
Jan  5 13:53:36.523: INFO: stderr: ""
Jan  5 13:53:36.523: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan  5 13:53:36.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2674'
Jan  5 13:53:41.005: INFO: stderr: ""
Jan  5 13:53:41.005: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:53:41.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2674" for this suite.
Jan  5 13:53:47.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:53:47.188: INFO: namespace kubectl-2674 deletion completed in 6.179182724s

• [SLOW TEST:10.940 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:53:47.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-857acac8-8516-4c24-8cdc-53210e2ad810
STEP: Creating a pod to test consume secrets
Jan  5 13:53:47.325: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70" in namespace "projected-6266" to be "success or failure"
Jan  5 13:53:47.334: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70": Phase="Pending", Reason="", readiness=false. Elapsed: 9.355527ms
Jan  5 13:53:49.344: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01962631s
Jan  5 13:53:51.354: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028656049s
Jan  5 13:53:53.382: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056972253s
Jan  5 13:53:55.389: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063749858s
STEP: Saw pod success
Jan  5 13:53:55.389: INFO: Pod "pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70" satisfied condition "success or failure"
Jan  5 13:53:55.393: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 13:53:55.441: INFO: Waiting for pod pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70 to disappear
Jan  5 13:53:55.467: INFO: Pod pod-projected-secrets-1834f3b0-46b7-4578-8f51-b72324b92e70 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:53:55.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6266" for this suite.
Jan  5 13:54:01.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:54:01.809: INFO: namespace projected-6266 deletion completed in 6.280288951s

• [SLOW TEST:14.621 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
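Here the permission knob under test is DefaultMode, which lives on the projected volume source and applies to every projected file that lacks a per-item Mode. A sketch with a placeholder secret name and mode:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	defaultMode := int32(0400)
	vol := v1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				// blanket mode for all projected files without their own Mode
				DefaultMode: &defaultMode,
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
					},
				}},
			},
		},
	}
	fmt.Printf("secret volume %q mounts files as %#o\n", vol.Name, defaultMode)
}
```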
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:54:01.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:54:02.072: INFO: Create a RollingUpdate DaemonSet
Jan  5 13:54:02.083: INFO: Check that daemon pods launch on every node of the cluster
Jan  5 13:54:02.232: INFO: Number of nodes with available pods: 0
Jan  5 13:54:02.232: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:04.161: INFO: Number of nodes with available pods: 0
Jan  5 13:54:04.161: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:04.306: INFO: Number of nodes with available pods: 0
Jan  5 13:54:04.307: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:05.247: INFO: Number of nodes with available pods: 0
Jan  5 13:54:05.247: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:06.268: INFO: Number of nodes with available pods: 0
Jan  5 13:54:06.268: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:07.895: INFO: Number of nodes with available pods: 0
Jan  5 13:54:07.896: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:08.364: INFO: Number of nodes with available pods: 0
Jan  5 13:54:08.364: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:09.264: INFO: Number of nodes with available pods: 0
Jan  5 13:54:09.264: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:10.252: INFO: Number of nodes with available pods: 0
Jan  5 13:54:10.252: INFO: Node iruya-node is running more than one daemon pod
Jan  5 13:54:11.284: INFO: Number of nodes with available pods: 2
Jan  5 13:54:11.284: INFO: Number of running nodes: 2, number of available pods: 2
Jan  5 13:54:11.284: INFO: Update the DaemonSet to trigger a rollout
Jan  5 13:54:11.300: INFO: Updating DaemonSet daemon-set
Jan  5 13:54:19.372: INFO: Roll back the DaemonSet before rollout is complete
Jan  5 13:54:19.392: INFO: Updating DaemonSet daemon-set
Jan  5 13:54:19.392: INFO: Make sure DaemonSet rollback is complete
Jan  5 13:54:19.428: INFO: Wrong image for pod: daemon-set-g8lzm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  5 13:54:19.428: INFO: Pod daemon-set-g8lzm is not available
Jan  5 13:54:20.774: INFO: Pod daemon-set-mpmfn is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7512, will wait for the garbage collector to delete the pods
Jan  5 13:54:21.003: INFO: Deleting DaemonSet.extensions daemon-set took: 9.853281ms
Jan  5 13:54:22.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.200820675s
Jan  5 13:54:36.631: INFO: Number of nodes with available pods: 0
Jan  5 13:54:36.631: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 13:54:36.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7512/daemonsets","resourceVersion":"19403120"},"items":null}

Jan  5 13:54:36.644: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7512/pods","resourceVersion":"19403120"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:54:36.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7512" for this suite.
Jan  5 13:54:42.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:54:42.832: INFO: namespace daemonsets-7512 deletion completed in 6.1644936s

• [SLOW TEST:41.016 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
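The rollback sequence above (good image, update to foo:non-existent, roll back mid-rollout) works because a DaemonSet "rollback" is just another template update that restores the previous spec; pods still healthy on the old revision are left alone, which is why the test can assert there were no unnecessary restarts. A sketch of the object and the two updates, with placeholder labels:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "app", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	// bad rollout: kubelet can never pull this image, so new pods stay unavailable
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	// rollback before the rollout completes: restore the previous template
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
	fmt.Printf("daemonset %q back on %s\n", ds.Name, ds.Spec.Template.Spec.Containers[0].Image)
}
```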
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:54:42.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.195.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.195.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.195.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.195.114_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3986.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3986.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3986.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.195.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.195.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.195.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.195.114_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 13:54:55.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.205: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.210: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.216: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.221: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.226: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.232: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.236: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.241: INFO: Unable to read 10.111.195.114_udp@PTR from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.245: INFO: Unable to read 10.111.195.114_tcp@PTR from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.250: INFO: Unable to read jessie_udp@dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.254: INFO: Unable to read jessie_tcp@dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.262: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.267: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.273: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.278: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.282: INFO: Unable to read jessie_udp@PodARecord from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.286: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.292: INFO: Unable to read 10.111.195.114_udp@PTR from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.299: INFO: Unable to read 10.111.195.114_tcp@PTR from pod dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd: the server could not find the requested resource (get pods dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd)
Jan  5 13:54:55.299: INFO: Lookups using dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd failed for: [wheezy_udp@dns-test-service.dns-3986.svc.cluster.local wheezy_tcp@dns-test-service.dns-3986.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.111.195.114_udp@PTR 10.111.195.114_tcp@PTR jessie_udp@dns-test-service.dns-3986.svc.cluster.local jessie_tcp@dns-test-service.dns-3986.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3986.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3986.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3986.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.111.195.114_udp@PTR 10.111.195.114_tcp@PTR]

Jan  5 13:55:00.445: INFO: DNS probes using dns-3986/dns-test-e491ede7-ee40-47e9-b262-2ed885ba6cbd succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:55:00.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3986" for this suite.
Jan  5 13:55:06.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:55:06.911: INFO: namespace dns-3986 deletion completed in 6.161929982s

• [SLOW TEST:24.079 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
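The dig loops above exercise the A, SRV, and PTR records that cluster DNS synthesizes for a service; the SRV names come from the service's named port. A sketch of the headless service the test creates (selector and port are placeholder values):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: v1.ServiceSpec{
			ClusterIP: v1.ClusterIPNone, // headless: DNS answers with pod IPs directly
			Selector:  map[string]string{"dns-test": "true"},
			Ports: []v1.ServicePort{{
				// the port name is what materializes the
				// _http._tcp.dns-test-service.<ns>.svc.cluster.local SRV record
				Name:     "http",
				Protocol: v1.ProtocolTCP,
				Port:     80,
			}},
		},
	}
	fmt.Printf("service %q serves A and SRV records once it has ready endpoints\n", svc.Name)
}
```

The burst of "Unable to read ..." lines at 13:54:55 is the prober polling for the OK marker files before the dig loops inside the wheezy and jessie pods have produced them; five seconds later all lookups succeed.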
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:55:06.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  5 13:55:23.108: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:23.529: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:25.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:25.540: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:27.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:27.540: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:29.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:29.541: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:31.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:31.539: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:33.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:33.541: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:35.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:35.540: INFO: Pod pod-with-poststart-http-hook still exists
Jan  5 13:55:37.530: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  5 13:55:37.539: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:55:37.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8803" for this suite.
Jan  5 13:55:59.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:55:59.915: INFO: namespace container-lifecycle-hook-8803 deletion completed in 22.367929418s

• [SLOW TEST:53.004 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
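A postStart HTTP hook is fired by the kubelet immediately after the container starts, against a target the pod spec names; here a separate handler pod receives the GET. A sketch; the path, host IP, and port are placeholders, and this API generation types the hook as v1.Handler (later releases use LifecycleHandler):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "docker.io/library/nginx:1.14-alpine",
				Lifecycle: &v1.Lifecycle{
					PostStart: &v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.32.0.4", // handler pod's IP; placeholder here
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("kubelet fires the hook right after %q's container starts\n", pod.Name)
}
```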
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:55:59.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:56:00.038: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  5 13:56:01.837: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:56:03.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1999" for this suite.
Jan  5 13:56:11.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:56:11.383: INFO: namespace replication-controller-1999 deletion completed in 8.213756996s

• [SLOW TEST:11.467 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
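The failure condition surfaced here is the ReplicaFailure status condition that the replication controller manager sets when pod creation is rejected, in this case by a ResourceQuota capping the namespace at two pods while the RC asks for three. A sketch of the two objects involved (names and image are placeholders):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &v1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: v1.ResourceQuotaSpec{
			Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
		},
	}
	replicas := int32(3) // one more than the quota allows
	rc := &v1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: v1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": "condition-test"},
			Template: &v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "condition-test"}},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
				},
			},
		},
	}
	// once the third pod is rejected, status.conditions gains a ReplicaFailure
	// entry; scaling Replicas down to 2 clears it, as the test then checks
	fmt.Printf("quota %q vs rc %q with %d replicas\n", quota.Name, rc.Name, *rc.Spec.Replicas)
}
```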
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:56:11.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-c6711341-57ef-4576-8c39-02c7c4be85ee
STEP: Creating a pod to test consume configMaps
Jan  5 13:56:12.940: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f" in namespace "projected-8611" to be "success or failure"
Jan  5 13:56:13.068: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f": Phase="Pending", Reason="", readiness=false. Elapsed: 127.703411ms
Jan  5 13:56:15.081: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140665226s
Jan  5 13:56:17.096: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156120517s
Jan  5 13:56:19.107: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166671517s
Jan  5 13:56:21.121: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180671375s
STEP: Saw pod success
Jan  5 13:56:21.121: INFO: Pod "pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f" satisfied condition "success or failure"
Jan  5 13:56:21.128: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 13:56:21.326: INFO: Waiting for pod pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f to disappear
Jan  5 13:56:21.336: INFO: Pod pod-projected-configmaps-a3b20037-99c3-4100-ae4d-197117e5f39f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:56:21.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8611" for this suite.
Jan  5 13:56:27.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:56:27.628: INFO: namespace projected-8611 deletion completed in 6.282882939s

• [SLOW TEST:16.244 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
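
A minimal sketch of the kind of pod this spec exercises: a ConfigMap consumed through a projected volume while the pod runs as a non-root user (names and image are illustrative, not the generated ones):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: projected-configmap-example
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example
    spec:
      securityContext:
        runAsUser: 1000                  # the non-root requirement in the spec name
      containers:
      - name: projected-configmap-volume-test
        image: busybox                   # illustrative; the suite uses its own test image
        command: ["cat", "/etc/projected-configmap-volume/data-1"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      restartPolicy: Never
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-example

The pod printing the mounted key and exiting 0 is the "success or failure" condition the log polls for.
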
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:56:27.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  5 13:56:27.790: INFO: Waiting up to 5m0s for pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6" in namespace "downward-api-5870" to be "success or failure"
Jan  5 13:56:27.799: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.117393ms
Jan  5 13:56:29.811: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02083124s
Jan  5 13:56:31.826: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035373811s
Jan  5 13:56:33.840: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049458232s
Jan  5 13:56:35.852: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061776359s
STEP: Saw pod success
Jan  5 13:56:35.852: INFO: Pod "downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6" satisfied condition "success or failure"
Jan  5 13:56:35.860: INFO: Trying to get logs from node iruya-node pod downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6 container dapi-container: 
STEP: delete the pod
Jan  5 13:56:35.930: INFO: Waiting for pod downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6 to disappear
Jan  5 13:56:35.937: INFO: Pod downward-api-64f593cd-dfaa-4fb0-8d7c-829d923565a6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:56:35.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5870" for this suite.
Jan  5 13:56:41.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:56:42.143: INFO: namespace downward-api-5870 deletion completed in 6.194530693s

• [SLOW TEST:14.514 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
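
A sketch of the downward-API mechanism this spec covers: the pod's own UID injected into an environment variable via fieldRef (pod name and image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-example
    spec:
      containers:
      - name: dapi-container             # container name matches the log above
        image: busybox                   # illustrative
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid    # the pod's UID, assigned by the API server
      restartPolicy: Never
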
SSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:56:42.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  5 13:56:50.919: INFO: Successfully updated pod "pod-update-e5440deb-ec8d-4866-a0c8-0ac1097a607c"
STEP: verifying the updated pod is in kubernetes
Jan  5 13:56:50.958: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:56:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5512" for this suite.
Jan  5 13:57:12.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:57:13.086: INFO: namespace pods-5512 deletion completed in 22.121960256s

• [SLOW TEST:30.943 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
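
For context: the pod-update test creates a pod, mutates it in place, and verifies the change is visible. Only a handful of pod fields are mutable after creation (labels and annotations, container images, activeDeadlineSeconds, toleration additions), so a label change is the usual choice; treating that as the mutation here is an assumption. A sketch (pod name from the log; container is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-update-e5440deb-ec8d-4866-a0c8-0ac1097a607c
      labels:
        time: "123456"       # re-applying with a changed value is a legal pod update
    spec:
      containers:
      - name: pause          # illustrative container
        image: k8s.gcr.io/pause:3.1
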
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:57:13.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-311f70e0-ff34-47af-ad5f-740d4c9dab79
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-311f70e0-ff34-47af-ad5f-740d4c9dab79
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:57:23.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3593" for this suite.
Jan  5 13:57:45.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:57:45.631: INFO: namespace projected-3593 deletion completed in 22.185303131s

• [SLOW TEST:32.545 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
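
For context: the kubelet periodically re-syncs configMap-backed volumes, so an update to the ConfigMap eventually appears in the mounted files; this run observed the propagation within roughly ten seconds. A sketch of a pod that makes the propagation visible (names and image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-watcher
    spec:
      containers:
      - name: watcher
        image: busybox
        # keep printing the mounted key; the output changes once the kubelet
        # syncs the updated ConfigMap into the projected volume
        command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 2; done"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-upd-example   # illustrative
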
SSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:57:45.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3110
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  5 13:57:45.685: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  5 13:58:22.022: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-3110 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:58:22.023: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:58:22.092734       8 log.go:172] (0xc0017b8d10) (0xc0014ee1e0) Create stream
I0105 13:58:22.092843       8 log.go:172] (0xc0017b8d10) (0xc0014ee1e0) Stream added, broadcasting: 1
I0105 13:58:22.101948       8 log.go:172] (0xc0017b8d10) Reply frame received for 1
I0105 13:58:22.102021       8 log.go:172] (0xc0017b8d10) (0xc00223d400) Create stream
I0105 13:58:22.102037       8 log.go:172] (0xc0017b8d10) (0xc00223d400) Stream added, broadcasting: 3
I0105 13:58:22.104560       8 log.go:172] (0xc0017b8d10) Reply frame received for 3
I0105 13:58:22.104613       8 log.go:172] (0xc0017b8d10) (0xc000a5b9a0) Create stream
I0105 13:58:22.104630       8 log.go:172] (0xc0017b8d10) (0xc000a5b9a0) Stream added, broadcasting: 5
I0105 13:58:22.107533       8 log.go:172] (0xc0017b8d10) Reply frame received for 5
I0105 13:58:22.261862       8 log.go:172] (0xc0017b8d10) Data frame received for 3
I0105 13:58:22.261905       8 log.go:172] (0xc00223d400) (3) Data frame handling
I0105 13:58:22.261923       8 log.go:172] (0xc00223d400) (3) Data frame sent
I0105 13:58:22.387047       8 log.go:172] (0xc0017b8d10) Data frame received for 1
I0105 13:58:22.387318       8 log.go:172] (0xc0017b8d10) (0xc00223d400) Stream removed, broadcasting: 3
I0105 13:58:22.387470       8 log.go:172] (0xc0014ee1e0) (1) Data frame handling
I0105 13:58:22.387820       8 log.go:172] (0xc0014ee1e0) (1) Data frame sent
I0105 13:58:22.387922       8 log.go:172] (0xc0017b8d10) (0xc0014ee1e0) Stream removed, broadcasting: 1
I0105 13:58:22.388018       8 log.go:172] (0xc0017b8d10) (0xc000a5b9a0) Stream removed, broadcasting: 5
I0105 13:58:22.388575       8 log.go:172] (0xc0017b8d10) (0xc0014ee1e0) Stream removed, broadcasting: 1
I0105 13:58:22.388627       8 log.go:172] (0xc0017b8d10) Go away received
I0105 13:58:22.388684       8 log.go:172] (0xc0017b8d10) (0xc00223d400) Stream removed, broadcasting: 3
I0105 13:58:22.388700       8 log.go:172] (0xc0017b8d10) (0xc000a5b9a0) Stream removed, broadcasting: 5
Jan  5 13:58:22.388: INFO: Waiting for endpoints: map[]
Jan  5 13:58:22.397: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-3110 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  5 13:58:22.397: INFO: >>> kubeConfig: /root/.kube/config
I0105 13:58:22.471389       8 log.go:172] (0xc001c680b0) (0xc001956e60) Create stream
I0105 13:58:22.471611       8 log.go:172] (0xc001c680b0) (0xc001956e60) Stream added, broadcasting: 1
I0105 13:58:22.479304       8 log.go:172] (0xc001c680b0) Reply frame received for 1
I0105 13:58:22.479347       8 log.go:172] (0xc001c680b0) (0xc000a5be00) Create stream
I0105 13:58:22.479364       8 log.go:172] (0xc001c680b0) (0xc000a5be00) Stream added, broadcasting: 3
I0105 13:58:22.481152       8 log.go:172] (0xc001c680b0) Reply frame received for 3
I0105 13:58:22.481194       8 log.go:172] (0xc001c680b0) (0xc00213a140) Create stream
I0105 13:58:22.481209       8 log.go:172] (0xc001c680b0) (0xc00213a140) Stream added, broadcasting: 5
I0105 13:58:22.484652       8 log.go:172] (0xc001c680b0) Reply frame received for 5
I0105 13:58:22.646470       8 log.go:172] (0xc001c680b0) Data frame received for 3
I0105 13:58:22.646527       8 log.go:172] (0xc000a5be00) (3) Data frame handling
I0105 13:58:22.646582       8 log.go:172] (0xc000a5be00) (3) Data frame sent
I0105 13:58:22.781052       8 log.go:172] (0xc001c680b0) (0xc000a5be00) Stream removed, broadcasting: 3
I0105 13:58:22.781602       8 log.go:172] (0xc001c680b0) (0xc00213a140) Stream removed, broadcasting: 5
I0105 13:58:22.781716       8 log.go:172] (0xc001c680b0) Data frame received for 1
I0105 13:58:22.781755       8 log.go:172] (0xc001956e60) (1) Data frame handling
I0105 13:58:22.781783       8 log.go:172] (0xc001956e60) (1) Data frame sent
I0105 13:58:22.781797       8 log.go:172] (0xc001c680b0) (0xc001956e60) Stream removed, broadcasting: 1
I0105 13:58:22.781814       8 log.go:172] (0xc001c680b0) Go away received
I0105 13:58:22.782945       8 log.go:172] (0xc001c680b0) (0xc001956e60) Stream removed, broadcasting: 1
I0105 13:58:22.782977       8 log.go:172] (0xc001c680b0) (0xc000a5be00) Stream removed, broadcasting: 3
I0105 13:58:22.782991       8 log.go:172] (0xc001c680b0) (0xc00213a140) Stream removed, broadcasting: 5
Jan  5 13:58:22.783: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:58:22.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3110" for this suite.
Jan  5 13:58:45.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:58:45.178: INFO: namespace pod-network-test-3110 deletion completed in 22.382517279s

• [SLOW TEST:59.547 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
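
For context: the /dial probe the framework runs through ExecWithOptions can be reproduced from any pod on the pod network. The URL below is copied from the log (pod IPs 10.44.0.2 and 10.44.0.1 are specific to this run); the prober pod itself is an illustrative sketch:

    apiVersion: v1
    kind: Pod
    metadata:
      name: intra-pod-probe
    spec:
      containers:
      - name: prober
        image: busybox
        command: ["sh", "-c",
          "wget -qO- 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"]
      restartPolicy: Never

The test passes when every target pod answers with its own hostname, i.e. the "Waiting for endpoints: map[]" lines above show an empty set of unreached endpoints.
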
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:58:45.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-eb64fdd4-529a-4f16-b5f8-1d6592902b75
STEP: Creating a pod to test consume configMaps
Jan  5 13:58:45.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d" in namespace "configmap-8115" to be "success or failure"
Jan  5 13:58:45.389: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.930812ms
Jan  5 13:58:47.402: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065179855s
Jan  5 13:58:49.410: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073334833s
Jan  5 13:58:51.420: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082675325s
Jan  5 13:58:53.429: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091561848s
Jan  5 13:58:55.437: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099601724s
STEP: Saw pod success
Jan  5 13:58:55.437: INFO: Pod "pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d" satisfied condition "success or failure"
Jan  5 13:58:55.441: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d container configmap-volume-test: 
STEP: delete the pod
Jan  5 13:58:55.509: INFO: Waiting for pod pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d to disappear
Jan  5 13:58:55.529: INFO: Pod pod-configmaps-c87c3f76-b8ef-48f5-93bc-28a386f78e1d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:58:55.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8115" for this suite.
Jan  5 13:59:01.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:59:01.671: INFO: namespace configmap-8115 deletion completed in 6.134972905s

• [SLOW TEST:16.492 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
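
The "with mappings" variant remaps a ConfigMap key to a chosen file path via items. A minimal sketch (names and image illustrative):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-volume-map-example
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example
    spec:
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["cat", "/etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      restartPolicy: Never
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map-example
          items:
          - key: data-1
            path: path/to/data-2    # the mapping: key data-1 lands at a chosen path
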
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:59:01.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ecc566f8-5e76-4979-84c8-92b2e92b8742
STEP: Creating a pod to test consume configMaps
Jan  5 13:59:02.297: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3" in namespace "configmap-6228" to be "success or failure"
Jan  5 13:59:02.302: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.587493ms
Jan  5 13:59:04.312: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015188909s
Jan  5 13:59:06.346: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048832867s
Jan  5 13:59:08.361: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063449518s
Jan  5 13:59:10.376: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079150128s
STEP: Saw pod success
Jan  5 13:59:10.377: INFO: Pod "pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3" satisfied condition "success or failure"
Jan  5 13:59:10.401: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3 container configmap-volume-test: 
STEP: delete the pod
Jan  5 13:59:10.578: INFO: Waiting for pod pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3 to disappear
Jan  5 13:59:10.627: INFO: Pod pod-configmaps-ce6690ea-e595-46b5-af95-6d4230be96f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:59:10.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6228" for this suite.
Jan  5 13:59:16.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 13:59:16.812: INFO: namespace configmap-6228 deletion completed in 6.164341124s

• [SLOW TEST:15.141 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
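
Here the same ConfigMap backs two volumes mounted at different paths in one pod. A minimal sketch (names and image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-two-volumes
    spec:
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/cm-volume-1/data-1 /etc/cm-volume-2/data-1"]
        volumeMounts:
        - name: cm-volume-1
          mountPath: /etc/cm-volume-1
        - name: cm-volume-2
          mountPath: /etc/cm-volume-2
      restartPolicy: Never
      volumes:
      - name: cm-volume-1
        configMap:
          name: configmap-test-volume-example   # same ConfigMap backs both volumes
      - name: cm-volume-2
        configMap:
          name: configmap-test-volume-example
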
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 13:59:16.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 13:59:25.052: INFO: Waiting up to 5m0s for pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c" in namespace "pods-316" to be "success or failure"
Jan  5 13:59:25.063: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.558296ms
Jan  5 13:59:27.077: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024956804s
Jan  5 13:59:29.087: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035639024s
Jan  5 13:59:31.094: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042322475s
Jan  5 13:59:33.865: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.813208275s
STEP: Saw pod success
Jan  5 13:59:33.866: INFO: Pod "client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c" satisfied condition "success or failure"
Jan  5 13:59:33.900: INFO: Trying to get logs from node iruya-node pod client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c container env3cont: 
STEP: delete the pod
Jan  5 13:59:33.994: INFO: Waiting for pod client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c to disappear
Jan  5 13:59:33.998: INFO: Pod client-envvars-8d4c14b9-62f5-4609-b292-578fd035c27c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 13:59:33.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-316" for this suite.
Jan  5 14:00:20.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:00:20.187: INFO: namespace pods-316 deletion completed in 46.184294712s

• [SLOW TEST:63.374 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
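
For context: the kubelet injects <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables for every service that exists when a pod starts, which is why the test waits for a server pod and service before creating the client pod. A sketch (service name illustrative; the client container name matches the log):

    apiVersion: v1
    kind: Service
    metadata:
      name: fooservice
    spec:
      selector:
        name: server
      ports:
      - port: 8765
        targetPort: 8080
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: client-envvars-example
    spec:
      containers:
      - name: env3cont
        image: busybox
        # FOOSERVICE_SERVICE_HOST / FOOSERVICE_SERVICE_PORT are injected
        # because the pod is created after the service exists
        command: ["sh", "-c", "env | grep FOOSERVICE"]
      restartPolicy: Never
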
SS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:00:20.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3539
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3539 to expose endpoints map[]
Jan  5 14:00:20.373: INFO: Get endpoints failed (16.906929ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  5 14:00:21.379: INFO: successfully validated that service endpoint-test2 in namespace services-3539 exposes endpoints map[] (1.023029529s elapsed)
STEP: Creating pod pod1 in namespace services-3539
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3539 to expose endpoints map[pod1:[80]]
Jan  5 14:00:25.568: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.134318322s elapsed, will retry)
Jan  5 14:00:29.635: INFO: successfully validated that service endpoint-test2 in namespace services-3539 exposes endpoints map[pod1:[80]] (8.20137151s elapsed)
STEP: Creating pod pod2 in namespace services-3539
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3539 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  5 14:00:34.184: INFO: Unexpected endpoints: found map[1449ac4d-6d41-4840-a8f2-3772fce7ff24:[80]], expected map[pod1:[80] pod2:[80]] (4.539317684s elapsed, will retry)
Jan  5 14:00:36.367: INFO: successfully validated that service endpoint-test2 in namespace services-3539 exposes endpoints map[pod1:[80] pod2:[80]] (6.722431066s elapsed)
STEP: Deleting pod pod1 in namespace services-3539
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3539 to expose endpoints map[pod2:[80]]
Jan  5 14:00:37.488: INFO: successfully validated that service endpoint-test2 in namespace services-3539 exposes endpoints map[pod2:[80]] (1.107099893s elapsed)
STEP: Deleting pod pod2 in namespace services-3539
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3539 to expose endpoints map[]
Jan  5 14:00:38.681: INFO: successfully validated that service endpoint-test2 in namespace services-3539 exposes endpoints map[] (1.172946305s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:00:39.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3539" for this suite.
Jan  5 14:01:01.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:01:01.282: INFO: namespace services-3539 deletion completed in 22.160766279s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.095 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
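
A sketch of what the endpoint bookkeeping above is tracking: a Service whose selector matches pod labels, with each Ready pod contributing <podIP>:80 to the Endpoints object (service and pod names from the log; image illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-test2
    spec:
      selector:
        name: endpoint-test2
      ports:
      - port: 80
        targetPort: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        name: endpoint-test2        # matching label puts pod1 into the endpoints
    spec:
      containers:
      - name: server
        image: nginx                # illustrative; anything serving on port 80
        ports:
        - containerPort: 80

Creating pod2 with the same label yields the map[pod1:[80] pod2:[80]] state in the log; deleting each pod shrinks the map back to empty.
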
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:01:01.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:01:09.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3332" for this suite.
Jan  5 14:01:15.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:01:15.836: INFO: namespace emptydir-wrapper-3332 deletion completed in 6.207471876s

• [SLOW TEST:14.552 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
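
For context: secret and configMap volumes are materialized by the kubelet inside emptyDir "wrapper" volumes; judging by the cleanup steps above (a secret, a configmap, a pod), the test mounts one of each in a single pod and checks the wrappers don't conflict. A minimal sketch (all names illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-wrapper-example
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      restartPolicy: Never
      volumes:
      - name: secret-volume
        secret:
          secretName: wrapper-secret-example
      - name: configmap-volume
        configMap:
          name: wrapper-configmap-example
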
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:01:15.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-6fcbf526-1e50-4b99-9045-e4cab1da22d9
STEP: Creating a pod to test consume secrets
Jan  5 14:01:16.389: INFO: Waiting up to 5m0s for pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e" in namespace "secrets-4463" to be "success or failure"
Jan  5 14:01:16.416: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.212979ms
Jan  5 14:01:18.424: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034721386s
Jan  5 14:01:20.434: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044879309s
Jan  5 14:01:22.457: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068140984s
Jan  5 14:01:24.467: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077866127s
STEP: Saw pod success
Jan  5 14:01:24.467: INFO: Pod "pod-secrets-be21ef87-60b3-4f81-a558-de871644825e" satisfied condition "success or failure"
Jan  5 14:01:24.482: INFO: Trying to get logs from node iruya-node pod pod-secrets-be21ef87-60b3-4f81-a558-de871644825e container secret-volume-test: 
STEP: delete the pod
Jan  5 14:01:24.669: INFO: Waiting for pod pod-secrets-be21ef87-60b3-4f81-a558-de871644825e to disappear
Jan  5 14:01:24.683: INFO: Pod pod-secrets-be21ef87-60b3-4f81-a558-de871644825e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:01:24.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4463" for this suite.
Jan  5 14:01:30.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:01:30.849: INFO: namespace secrets-4463 deletion completed in 6.160020162s

• [SLOW TEST:15.013 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
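
The defaultMode field sets the permission bits on every file projected from the secret. A sketch of the shape this spec exercises (names and image illustrative):

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-example
    data:
      data-1: dmFsdWUtMQ==             # base64 for "value-1"
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-example
    spec:
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      restartPolicy: Never
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-example
          defaultMode: 0400            # octal; mounted files show up as -r--------
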
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:01:30.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  5 14:01:30.995: INFO: Waiting up to 5m0s for pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57" in namespace "containers-1648" to be "success or failure"
Jan  5 14:01:31.043: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57": Phase="Pending", Reason="", readiness=false. Elapsed: 47.643081ms
Jan  5 14:01:33.055: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059655146s
Jan  5 14:01:35.077: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081943521s
Jan  5 14:01:37.088: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092764321s
Jan  5 14:01:39.098: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102446044s
STEP: Saw pod success
Jan  5 14:01:39.098: INFO: Pod "client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57" satisfied condition "success or failure"
Jan  5 14:01:39.102: INFO: Trying to get logs from node iruya-node pod client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57 container test-container: 
STEP: delete the pod
Jan  5 14:01:39.233: INFO: Waiting for pod client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57 to disappear
Jan  5 14:01:39.247: INFO: Pod client-containers-cbf57f92-635e-4eb1-9679-8f7824b34d57 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:01:39.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1648" for this suite.
Jan  5 14:01:45.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:01:45.447: INFO: namespace containers-1648 deletion completed in 6.192947129s

• [SLOW TEST:14.596 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
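
"Override the image's default arguments (docker cmd)" maps onto the pod spec's args field. A minimal sketch (pod name and image illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example
    spec:
      containers:
      - name: test-container
        image: busybox
        # `args` overrides the image's CMD (the "docker cmd") while leaving
        # its ENTRYPOINT alone; `command` would override the ENTRYPOINT instead.
        args: ["echo", "overridden", "arguments"]
      restartPolicy: Never
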
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:01:45.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:01:45.575: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:01:46.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3957" for this suite.
Jan  5 14:01:52.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:01:52.959: INFO: namespace custom-resource-definition-3957 deletion completed in 6.21491595s

• [SLOW TEST:7.511 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
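
For context: the cluster runs v1.15, so the apiextensions.k8s.io/v1beta1 CRD API applies. A minimal definition of the sort of object the test creates and deletes (group and kind illustrative):

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com           # must be <plural>.<group>
    spec:
      group: example.com
      versions:
      - name: v1
        served: true
        storage: true
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
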
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:01:52.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 14:02:05.180: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-df7ae51f-c65d-4030-beae-8e2bd3b42aae contains '' instead of 'foo.example.com.'
Jan  5 14:02:05.190: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-df7ae51f-c65d-4030-beae-8e2bd3b42aae contains '' instead of 'foo.example.com.'
Jan  5 14:02:05.190: INFO: Lookups using dns-2678/dns-test-df7ae51f-c65d-4030-beae-8e2bd3b42aae failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local]

Jan  5 14:02:10.218: INFO: DNS probes using dns-test-df7ae51f-c65d-4030-beae-8e2bd3b42aae succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 14:02:24.450: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains '' instead of 'bar.example.com.'
Jan  5 14:02:24.461: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains '' instead of 'bar.example.com.'
Jan  5 14:02:24.462: INFO: Lookups using dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local]

Jan  5 14:02:29.475: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  5 14:02:29.488: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  5 14:02:29.488: INFO: Lookups using dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local]

Jan  5 14:02:34.482: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  5 14:02:34.490: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  5 14:02:34.490: INFO: Lookups using dns-2678/dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local]

Jan  5 14:02:39.486: INFO: DNS probes using dns-test-a4cca99f-574b-46f6-9d9b-6e3845fa4ad2 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2678.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  5 14:02:51.905: INFO: File wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-24248301-5d33-4702-83eb-66bd96e09839 contains '' instead of '10.108.69.239'
Jan  5 14:02:51.963: INFO: File jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local from pod  dns-2678/dns-test-24248301-5d33-4702-83eb-66bd96e09839 contains '' instead of '10.108.69.239'
Jan  5 14:02:51.963: INFO: Lookups using dns-2678/dns-test-24248301-5d33-4702-83eb-66bd96e09839 failed for: [wheezy_udp@dns-test-service-3.dns-2678.svc.cluster.local jessie_udp@dns-test-service-3.dns-2678.svc.cluster.local]

Jan  5 14:02:56.987: INFO: DNS probes using dns-test-24248301-5d33-4702-83eb-66bd96e09839 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:02:57.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2678" for this suite.
Jan  5 14:03:03.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:03:03.484: INFO: namespace dns-2678 deletion completed in 6.258934503s

• [SLOW TEST:70.523 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
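
The three phases the probes above walk through start from an ExternalName service, which DNS serves as a CNAME. A sketch reconstructed from the log (service name and addresses are the ones the probes query):

    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-3
    spec:
      type: ExternalName
      externalName: foo.example.com    # phase 1: resolves as CNAME foo.example.com.
    # Phase 2 patches externalName to bar.example.com; the probes see the stale
    # CNAME until DNS catches up, exactly as logged above. Phase 3 switches the
    # service to type: ClusterIP, after which the same name resolves to an A
    # record (10.108.69.239 in this run).
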
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:03:03.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0105 14:03:16.208163       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 14:03:16.208: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:03:16.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4421" for this suite.
Jan  5 14:03:26.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:03:27.771: INFO: namespace gc-4421 deletion completed in 11.551427278s

• [SLOW TEST:24.287 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
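
For context: the test gives half of the first RC's pods a second ownerReference pointing at the second RC, deletes the first RC with foreground (wait-for-dependents) deletion, and asserts those pods survive because a valid owner remains. Roughly what the metadata on a shared pod looks like (a fragment; RC names from the log, UIDs are placeholders that come from the live objects):

    metadata:
      ownerReferences:
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-be-deleted
        uid: "<uid-of-rc-1>"           # placeholder; assigned by the API server
        controller: true
        blockOwnerDeletion: true
      - apiVersion: v1
        kind: ReplicationController
        name: simpletest-rc-to-stay
        uid: "<uid-of-rc-2>"           # placeholder
        controller: false              # at most one reference may be the controller
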
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:03:27.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  5 14:03:28.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9640'
Jan  5 14:03:31.725: INFO: stderr: ""
Jan  5 14:03:31.725: INFO: stdout: "pod/pause created\n"
Jan  5 14:03:31.725: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  5 14:03:31.725: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9640" to be "running and ready"
Jan  5 14:03:31.772: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 46.990623ms
Jan  5 14:03:33.789: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064062848s
Jan  5 14:03:35.805: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079303336s
Jan  5 14:03:37.815: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08976804s
Jan  5 14:03:39.827: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.102000375s
Jan  5 14:03:39.827: INFO: Pod "pause" satisfied condition "running and ready"
Jan  5 14:03:39.827: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  5 14:03:39.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9640'
Jan  5 14:03:40.021: INFO: stderr: ""
Jan  5 14:03:40.021: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  5 14:03:40.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9640'
Jan  5 14:03:40.149: INFO: stderr: ""
Jan  5 14:03:40.149: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  5 14:03:40.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9640'
Jan  5 14:03:40.370: INFO: stderr: ""
Jan  5 14:03:40.370: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  5 14:03:40.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9640'
Jan  5 14:03:40.528: INFO: stderr: ""
Jan  5 14:03:40.529: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  5 14:03:40.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9640'
Jan  5 14:03:40.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:03:40.750: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  5 14:03:40.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9640'
Jan  5 14:03:40.948: INFO: stderr: "No resources found.\n"
Jan  5 14:03:40.948: INFO: stdout: ""
Jan  5 14:03:40.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9640 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 14:03:41.068: INFO: stderr: ""
Jan  5 14:03:41.068: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:03:41.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9640" for this suite.
Jan  5 14:03:47.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:03:47.269: INFO: namespace kubectl-9640 deletion completed in 6.190275366s

• [SLOW TEST:19.497 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:03:47.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:03:47.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56" in namespace "projected-4879" to be "success or failure"
Jan  5 14:03:47.475: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599724ms
Jan  5 14:03:49.485: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018535793s
Jan  5 14:03:51.494: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027796498s
Jan  5 14:03:53.502: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036240833s
Jan  5 14:03:55.515: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049314962s
STEP: Saw pod success
Jan  5 14:03:55.516: INFO: Pod "downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56" satisfied condition "success or failure"
Jan  5 14:03:55.522: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56 container client-container: 
STEP: delete the pod
Jan  5 14:03:56.080: INFO: Waiting for pod downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56 to disappear
Jan  5 14:03:56.093: INFO: Pod downwardapi-volume-cb0b47a7-11b5-44bf-97a0-5add461f2d56 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:03:56.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4879" for this suite.
Jan  5 14:04:02.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:04:02.290: INFO: namespace projected-4879 deletion completed in 6.190764253s

• [SLOW TEST:15.020 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
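
The spec above checks that a projected downwardAPI volume surfaces the container's memory limit as a file. A minimal sketch of such a pod (all names hypothetical; the limit is written as bytes, e.g. 67108864 for 64Mi):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mem-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF
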
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:04:02.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  5 14:04:10.429: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-5c65e190-eb2d-4ac1-8c84-b72824e7f1fd,GenerateName:,Namespace:events-1111,SelfLink:/api/v1/namespaces/events-1111/pods/send-events-5c65e190-eb2d-4ac1-8c84-b72824e7f1fd,UID:fb36f682-75c3-4c1c-8abc-7baca7fe22ad,ResourceVersion:19404776,Generation:0,CreationTimestamp:2020-01-05 14:04:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 383007483,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m56mw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m56mw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-m56mw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022b1610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0022b1630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:04:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:04:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:04:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:04:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-05 14:04:02 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-05 14:04:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://f33cbb9bcb37f705870588bcbbe474d6e316b3077bd19860aa978992a26cca98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  5 14:04:12.444: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  5 14:04:14.452: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:04:14.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1111" for this suite.
Jan  5 14:04:58.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:04:58.692: INFO: namespace events-1111 deletion completed in 44.173393996s

• [SLOW TEST:56.401 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
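
The events spec waits until both a scheduler event (Scheduled) and a kubelet event (e.g. Pulled/Started) exist for the pod. The same lookup by hand, using the pod name and namespace from the log:

  kubectl get events -n events-1111 \
    --field-selector involvedObject.name=send-events-5c65e190-eb2d-4ac1-8c84-b72824e7f1fd
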
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:04:58.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  5 14:04:58.861: INFO: Waiting up to 5m0s for pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22" in namespace "emptydir-13" to be "success or failure"
Jan  5 14:04:58.873: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22": Phase="Pending", Reason="", readiness=false. Elapsed: 11.084124ms
Jan  5 14:05:00.888: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026150175s
Jan  5 14:05:02.901: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039088815s
Jan  5 14:05:04.909: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047991826s
Jan  5 14:05:06.923: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061163231s
STEP: Saw pod success
Jan  5 14:05:06.923: INFO: Pod "pod-505052e0-4404-4cb2-ae4f-b0f079d33d22" satisfied condition "success or failure"
Jan  5 14:05:06.928: INFO: Trying to get logs from node iruya-node pod pod-505052e0-4404-4cb2-ae4f-b0f079d33d22 container test-container: 
STEP: delete the pod
Jan  5 14:05:06.976: INFO: Waiting for pod pod-505052e0-4404-4cb2-ae4f-b0f079d33d22 to disappear
Jan  5 14:05:06.985: INFO: Pod pod-505052e0-4404-4cb2-ae4f-b0f079d33d22 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:05:06.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-13" for this suite.
Jan  5 14:05:13.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:05:13.166: INFO: namespace emptydir-13 deletion completed in 6.175022838s

• [SLOW TEST:14.472 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
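
The (root,0666,tmpfs) case mounts a memory-backed emptyDir and asserts the file mode inside it. A sketch that inspects rather than asserts (pod name hypothetical; the e2e test itself drives this through the mounttest image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "mount | grep /test-volume; stat -c '%a' /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory        # tmpfs-backed
  EOF
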
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:05:13.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  5 14:05:21.513: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:05:21.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4835" for this suite.
Jan  5 14:05:27.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:05:27.750: INFO: namespace container-runtime-4835 deletion completed in 6.142502258s

• [SLOW TEST:14.584 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
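
With terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only on failure, so a succeeding pod should report an empty message, which is what the "Expected: &{}" line above asserts. Sketch (pod name hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termmsg-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["true"]       # exits 0 and writes no termination-log file
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # once the pod is Succeeded, this prints nothing:
  kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
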
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:05:27.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:05:27.841: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b" in namespace "downward-api-6772" to be "success or failure"
Jan  5 14:05:27.889: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.711661ms
Jan  5 14:05:29.899: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058252867s
Jan  5 14:05:31.909: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067982547s
Jan  5 14:05:33.951: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110426438s
Jan  5 14:05:35.960: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11919704s
STEP: Saw pod success
Jan  5 14:05:35.960: INFO: Pod "downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b" satisfied condition "success or failure"
Jan  5 14:05:35.964: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b container client-container: 
STEP: delete the pod
Jan  5 14:05:36.110: INFO: Waiting for pod downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b to disappear
Jan  5 14:05:36.123: INFO: Pod downwardapi-volume-59d70332-f59a-46d4-bc64-4196a5f3592b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:05:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6772" for this suite.
Jan  5 14:05:42.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:05:42.287: INFO: namespace downward-api-6772 deletion completed in 6.143401411s

• [SLOW TEST:14.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
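
defaultMode on a downwardAPI volume sets the permission bits of every projected file (the upstream test uses 0400). Sketch (names hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # expect -r--------
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF
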
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:05:42.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 14:05:42.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-655'
Jan  5 14:05:42.591: INFO: stderr: ""
Jan  5 14:05:42.591: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  5 14:05:52.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-655 -o json'
Jan  5 14:05:52.822: INFO: stderr: ""
Jan  5 14:05:52.822: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-05T14:05:42Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-655\",\n        \"resourceVersion\": \"19405003\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-655/pods/e2e-test-nginx-pod\",\n        \"uid\": \"aceae6c2-9c55-4249-b8e5-628d8e7cb77e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-mq82k\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-mq82k\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-mq82k\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-05T14:05:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-05T14:05:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-05T14:05:50Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-05T14:05:42Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://d0a6c1befb98ef3cacb78587d315daa4ff7bbcffd17c28d00cbb611b2cb54054\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-05T14:05:49Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-05T14:05:42Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  5 14:05:52.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-655'
Jan  5 14:05:53.364: INFO: stderr: ""
Jan  5 14:05:53.364: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  5 14:05:53.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-655'
Jan  5 14:05:58.795: INFO: stderr: ""
Jan  5 14:05:58.796: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:05:58.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-655" for this suite.
Jan  5 14:06:04.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:06:05.036: INFO: namespace kubectl-655 deletion completed in 6.212501045s

• [SLOW TEST:22.749 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
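
The replace flow above is: dump the live pod as JSON, swap the image, pipe it back. Condensed (namespace and names taken from the log; the container image is one of the few pod-spec fields a replace may change in place):

  kubectl get pod e2e-test-nginx-pod -n kubectl-655 -o json \
    | sed 's#docker.io/library/nginx:1.14-alpine#docker.io/library/busybox:1.29#' \
    | kubectl replace -f -
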
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:06:05.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-dda411dc-9e45-4f1f-8dd2-26128a52f3eb
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:06:05.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6137" for this suite.
Jan  5 14:06:11.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:06:11.357: INFO: namespace secrets-6137 deletion completed in 6.146340894s

• [SLOW TEST:6.318 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
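
No pod is needed for this one: the API server rejects a Secret whose data map contains an empty key at create time. Sketch (name hypothetical):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-demo
  data:
    "": dGVzdA==        # empty key -> the create request fails validation
  EOF
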
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:06:11.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  5 14:06:17.927: INFO: 1 pods remaining
Jan  5 14:06:17.927: INFO: 0 pods has nil DeletionTimestamp
Jan  5 14:06:17.927: INFO: 
STEP: Gathering metrics
W0105 14:06:18.534140       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 14:06:18.534: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:06:18.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7526" for this suite.
Jan  5 14:06:26.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:06:26.745: INFO: namespace gc-7526 deletion completed in 8.205475752s

• [SLOW TEST:15.388 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
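
"deleteOptions says so" here means propagationPolicy: Foreground: the RC is held by a foregroundDeletion finalizer until the garbage collector has removed its pods. A v1.15-era kubectl has no flag for this, but the policy can be sent straight to the API, e.g. via kubectl proxy (rc name and namespace hypothetical):

  kubectl proxy --port=8001 &
  curl -X DELETE http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
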
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:06:26.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:06:26.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f" in namespace "projected-6448" to be "success or failure"
Jan  5 14:06:26.946: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.522203ms
Jan  5 14:06:28.957: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041442719s
Jan  5 14:06:30.971: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055448843s
Jan  5 14:06:32.985: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069641905s
Jan  5 14:06:34.995: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079458267s
STEP: Saw pod success
Jan  5 14:06:34.995: INFO: Pod "downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f" satisfied condition "success or failure"
Jan  5 14:06:34.999: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f container client-container: 
STEP: delete the pod
Jan  5 14:06:35.055: INFO: Waiting for pod downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f to disappear
Jan  5 14:06:35.071: INFO: Pod downwardapi-volume-0aefd35b-7543-419a-9913-48fdf6df6a7f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:06:35.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6448" for this suite.
Jan  5 14:06:41.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:06:41.257: INFO: namespace projected-6448 deletion completed in 6.158281192s

• [SLOW TEST:14.512 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
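
When the container sets no CPU limit, the downward API falls back to the node's allocatable CPU, which is what this spec verifies. Relative to the memory-limit sketch after the earlier projected-downwardAPI test, drop resources.limits from the container and project this item instead (names hypothetical, as before):

  - path: cpu_limit
    resourceFieldRef:
      containerName: client-container
      resource: limits.cpu
      divisor: 1m         # report the value in millicores
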
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:06:41.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  5 14:06:41.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-740'
Jan  5 14:06:41.875: INFO: stderr: ""
Jan  5 14:06:41.875: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 14:06:41.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-740'
Jan  5 14:06:42.034: INFO: stderr: ""
Jan  5 14:06:42.035: INFO: stdout: "update-demo-nautilus-df74q update-demo-nautilus-nn899 "
Jan  5 14:06:42.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-df74q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:42.258: INFO: stderr: ""
Jan  5 14:06:42.258: INFO: stdout: ""
Jan  5 14:06:42.258: INFO: update-demo-nautilus-df74q is created but not running
Jan  5 14:06:47.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-740'
Jan  5 14:06:48.558: INFO: stderr: ""
Jan  5 14:06:48.558: INFO: stdout: "update-demo-nautilus-df74q update-demo-nautilus-nn899 "
Jan  5 14:06:48.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-df74q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:48.753: INFO: stderr: ""
Jan  5 14:06:48.753: INFO: stdout: ""
Jan  5 14:06:48.753: INFO: update-demo-nautilus-df74q is created but not running
Jan  5 14:06:53.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-740'
Jan  5 14:06:53.954: INFO: stderr: ""
Jan  5 14:06:53.954: INFO: stdout: "update-demo-nautilus-df74q update-demo-nautilus-nn899 "
Jan  5 14:06:53.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-df74q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:54.073: INFO: stderr: ""
Jan  5 14:06:54.073: INFO: stdout: "true"
Jan  5 14:06:54.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-df74q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:54.168: INFO: stderr: ""
Jan  5 14:06:54.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 14:06:54.168: INFO: validating pod update-demo-nautilus-df74q
Jan  5 14:06:54.179: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 14:06:54.179: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 14:06:54.179: INFO: update-demo-nautilus-df74q is verified up and running
Jan  5 14:06:54.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nn899 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:54.278: INFO: stderr: ""
Jan  5 14:06:54.278: INFO: stdout: "true"
Jan  5 14:06:54.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nn899 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-740'
Jan  5 14:06:54.389: INFO: stderr: ""
Jan  5 14:06:54.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 14:06:54.390: INFO: validating pod update-demo-nautilus-nn899
Jan  5 14:06:54.403: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 14:06:54.403: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 14:06:54.403: INFO: update-demo-nautilus-nn899 is verified up and running
STEP: using delete to clean up resources
Jan  5 14:06:54.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-740'
Jan  5 14:06:54.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:06:54.507: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  5 14:06:54.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-740'
Jan  5 14:06:54.618: INFO: stderr: "No resources found.\n"
Jan  5 14:06:54.619: INFO: stdout: ""
Jan  5 14:06:54.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-740 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 14:06:54.801: INFO: stderr: ""
Jan  5 14:06:54.801: INFO: stdout: "update-demo-nautilus-df74q\n"
Jan  5 14:06:55.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-740'
Jan  5 14:06:55.993: INFO: stderr: "No resources found.\n"
Jan  5 14:06:55.993: INFO: stdout: ""
Jan  5 14:06:55.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-740 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  5 14:06:56.157: INFO: stderr: ""
Jan  5 14:06:56.157: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:06:56.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-740" for this suite.
Jan  5 14:07:18.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:07:18.544: INFO: namespace kubectl-740 deletion completed in 22.38084188s

• [SLOW TEST:37.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
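
The Update Demo flow is: create the RC from a manifest, poll its pods with a go-template until every container runs and serves the expected data, then force-delete. Condensed (manifest path hypothetical; the templates are as in the log):

  kubectl create -f update-demo-nautilus-rc.yaml
  kubectl get pods -l name=update-demo -o template \
    --template='{{range .items}}{{.metadata.name}} {{end}}'
  kubectl delete --grace-period=0 --force -f update-demo-nautilus-rc.yaml
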
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:07:18.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 14:07:18.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6795'
Jan  5 14:07:18.839: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 14:07:18.839: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  5 14:07:18.917: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-ckzhx]
Jan  5 14:07:18.918: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-ckzhx" in namespace "kubectl-6795" to be "running and ready"
Jan  5 14:07:18.922: INFO: Pod "e2e-test-nginx-rc-ckzhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06962ms
Jan  5 14:07:20.929: INFO: Pod "e2e-test-nginx-rc-ckzhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011302143s
Jan  5 14:07:22.938: INFO: Pod "e2e-test-nginx-rc-ckzhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020489521s
Jan  5 14:07:24.954: INFO: Pod "e2e-test-nginx-rc-ckzhx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036210641s
Jan  5 14:07:27.000: INFO: Pod "e2e-test-nginx-rc-ckzhx": Phase="Running", Reason="", readiness=true. Elapsed: 8.082562425s
Jan  5 14:07:27.000: INFO: Pod "e2e-test-nginx-rc-ckzhx" satisfied condition "running and ready"
Jan  5 14:07:27.000: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-ckzhx]
Jan  5 14:07:27.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6795'
Jan  5 14:07:27.221: INFO: stderr: ""
Jan  5 14:07:27.221: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan  5 14:07:27.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6795'
Jan  5 14:07:27.315: INFO: stderr: ""
Jan  5 14:07:27.315: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:07:27.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6795" for this suite.
Jan  5 14:07:49.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:07:49.568: INFO: namespace kubectl-6795 deletion completed in 22.246962093s

• [SLOW TEST:31.024 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
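
The deprecation warning above is the point of interest: --generator=run/v1 makes kubectl run create a ReplicationController instead of a bare pod, and logs can then be fetched through the rc. Equivalent v1.15-era commands:

  kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
  kubectl logs rc/e2e-test-nginx-rc      # picks a pod belonging to the rc
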
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:07:49.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  5 14:07:49.680: INFO: Waiting up to 5m0s for pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098" in namespace "downward-api-8942" to be "success or failure"
Jan  5 14:07:49.730: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Pending", Reason="", readiness=false. Elapsed: 50.175774ms
Jan  5 14:07:51.744: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063518934s
Jan  5 14:07:53.751: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070987929s
Jan  5 14:07:55.800: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12005812s
Jan  5 14:07:57.815: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Running", Reason="", readiness=true. Elapsed: 8.135284357s
Jan  5 14:07:59.830: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149457693s
STEP: Saw pod success
Jan  5 14:07:59.830: INFO: Pod "downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098" satisfied condition "success or failure"
Jan  5 14:07:59.838: INFO: Trying to get logs from node iruya-node pod downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098 container dapi-container: 
STEP: delete the pod
Jan  5 14:08:00.017: INFO: Waiting for pod downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098 to disappear
Jan  5 14:08:00.026: INFO: Pod downward-api-12eb663a-80d9-4ea7-aafc-28d3297c5098 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:08:00.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8942" for this suite.
Jan  5 14:08:06.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:08:06.171: INFO: namespace downward-api-8942 deletion completed in 6.13710226s

• [SLOW TEST:16.602 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
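
Same defaulting rule as the downward-API volume cases, but via environment variables: with no resources.limits set, resourceFieldRef env vars resolve to the node's allocatable values. Sketch (pod name hypothetical; container name as in the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dapi-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.cpu
      - name: MEMORY_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.memory
  EOF
  kubectl logs dapi-env-demo    # prints the node-allocatable values
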
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:08:06.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:08:36.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9489" for this suite.
Jan  5 14:08:42.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:08:42.845: INFO: namespace namespaces-9489 deletion completed in 6.149528698s
STEP: Destroying namespace "nsdeletetest-9393" for this suite.
Jan  5 14:08:42.849: INFO: Namespace nsdeletetest-9393 was already deleted
STEP: Destroying namespace "nsdeletetest-3004" for this suite.
Jan  5 14:08:48.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:08:49.022: INFO: namespace nsdeletetest-3004 deletion completed in 6.17302432s

• [SLOW TEST:42.851 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
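
The namespace spec creates a pod, deletes the namespace, and verifies that a recreated namespace of the same name starts empty. Manual equivalent (names hypothetical):

  kubectl create namespace nsdelete-demo
  kubectl run busy --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
  kubectl delete namespace nsdelete-demo      # blocks until the pods are reaped
  kubectl create namespace nsdelete-demo
  kubectl get pods -n nsdelete-demo           # "No resources found."
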
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:08:49.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0105 14:09:30.509308       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 14:09:30.509: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:09:30.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7524" for this suite.
Jan  5 14:09:42.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:09:42.696: INFO: namespace gc-7524 deletion completed in 12.181754761s

• [SLOW TEST:53.673 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
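
The orphan case is the mirror image of the foreground test above: propagationPolicy: Orphan (--cascade=false on a v1.15 kubectl) deletes the RC but leaves its pods running with the RC ownerReference stripped (rc name hypothetical):

  kubectl delete rc simpletest-rc --cascade=false
  kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences}{"\n"}{end}'
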
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:09:42.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  5 14:09:45.189: INFO: Waiting up to 5m0s for pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274" in namespace "emptydir-6053" to be "success or failure"
Jan  5 14:09:45.993: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 803.81289ms
Jan  5 14:09:48.002: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812645205s
Jan  5 14:09:50.013: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 4.824375781s
Jan  5 14:09:52.030: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 6.840701818s
Jan  5 14:09:54.040: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 8.851355714s
Jan  5 14:09:56.058: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Pending", Reason="", readiness=false. Elapsed: 10.86925395s
Jan  5 14:09:58.066: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.876791398s
STEP: Saw pod success
Jan  5 14:09:58.066: INFO: Pod "pod-0cc18903-6b3c-43f6-be0f-cda2e7416274" satisfied condition "success or failure"
Jan  5 14:09:58.070: INFO: Trying to get logs from node iruya-node pod pod-0cc18903-6b3c-43f6-be0f-cda2e7416274 container test-container: 
STEP: delete the pod
Jan  5 14:09:58.154: INFO: Waiting for pod pod-0cc18903-6b3c-43f6-be0f-cda2e7416274 to disappear
Jan  5 14:09:58.211: INFO: Pod pod-0cc18903-6b3c-43f6-be0f-cda2e7416274 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:09:58.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6053" for this suite.
Jan  5 14:10:04.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:10:04.413: INFO: namespace emptydir-6053 deletion completed in 6.191787473s

• [SLOW TEST:21.715 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
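
Unlike the tmpfs cases, this spec omits medium, so the emptyDir is backed by the node's local storage, and the test checks the mount's default mode. Only the volume stanza changes from the emptyDir sketch after the (root,0666,tmpfs) test earlier:

  volumes:
  - name: test-volume
    emptyDir: {}        # default medium: node disk rather than tmpfs
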
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:10:04.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 permissions on the node's default medium
Jan  5 14:10:04.523: INFO: Waiting up to 5m0s for pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2" in namespace "emptydir-6229" to be "success or failure"
Jan  5 14:10:04.555: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.558407ms
Jan  5 14:10:06.568: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044264996s
Jan  5 14:10:08.580: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056390433s
Jan  5 14:10:10.585: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061142405s
Jan  5 14:10:12.611: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087287139s
STEP: Saw pod success
Jan  5 14:10:12.611: INFO: Pod "pod-d590c4af-4a23-47a3-9920-8616a97318c2" satisfied condition "success or failure"
Jan  5 14:10:12.616: INFO: Trying to get logs from node iruya-node pod pod-d590c4af-4a23-47a3-9920-8616a97318c2 container test-container: 
STEP: delete the pod
Jan  5 14:10:12.775: INFO: Waiting for pod pod-d590c4af-4a23-47a3-9920-8616a97318c2 to disappear
Jan  5 14:10:12.784: INFO: Pod pod-d590c4af-4a23-47a3-9920-8616a97318c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:10:12.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6229" for this suite.
Jan  5 14:10:18.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:10:18.955: INFO: namespace emptydir-6229 deletion completed in 6.16579397s

• [SLOW TEST:14.542 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
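
This variant exercises file permissions rather than the mount itself: the container creates a file in the emptyDir as root, sets mode 0777, and reports the result. A sketch of just the container, assuming a pod shaped like the previous sketch (image and paths are again illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// perm0777Container creates a file inside the emptyDir mount, chmods it to
// 0777, and prints the resulting mode for the test to verify.
var perm0777Container = corev1.Container{
	Name:    "test-container",
	Image:   "busybox",
	Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"},
	VolumeMounts: []corev1.VolumeMount{{
		Name:      "test-volume",
		MountPath: "/test-volume",
	}},
}
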
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:10:18.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-9wmb
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 14:10:19.135: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9wmb" in namespace "subpath-3849" to be "success or failure"
Jan  5 14:10:19.146: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.872805ms
Jan  5 14:10:21.157: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021932107s
Jan  5 14:10:23.167: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031273626s
Jan  5 14:10:25.175: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039464931s
Jan  5 14:10:27.204: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.068042847s
Jan  5 14:10:29.212: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.07662001s
Jan  5 14:10:31.222: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.086070575s
Jan  5 14:10:33.233: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.097345962s
Jan  5 14:10:35.246: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.110831177s
Jan  5 14:10:37.256: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.120831213s
Jan  5 14:10:39.266: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.130333693s
Jan  5 14:10:41.279: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 22.143797054s
Jan  5 14:10:43.315: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 24.179862368s
Jan  5 14:10:45.325: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Running", Reason="", readiness=true. Elapsed: 26.189628095s
Jan  5 14:10:47.338: INFO: Pod "pod-subpath-test-downwardapi-9wmb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.202141433s
STEP: Saw pod success
Jan  5 14:10:47.338: INFO: Pod "pod-subpath-test-downwardapi-9wmb" satisfied condition "success or failure"
Jan  5 14:10:47.345: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-9wmb container test-container-subpath-downwardapi-9wmb: 
STEP: delete the pod
Jan  5 14:10:47.407: INFO: Waiting for pod pod-subpath-test-downwardapi-9wmb to disappear
Jan  5 14:10:47.472: INFO: Pod pod-subpath-test-downwardapi-9wmb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9wmb
Jan  5 14:10:47.473: INFO: Deleting pod "pod-subpath-test-downwardapi-9wmb" in namespace "subpath-3849"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:10:47.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3849" for this suite.
Jan  5 14:10:53.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:10:53.767: INFO: namespace subpath-3849 deletion completed in 6.220560115s

• [SLOW TEST:34.811 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
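
The "atomic writer" here is the downward API volume, which publishes files through an atomically swapped symlink; the subpath test mounts one file out of that volume via VolumeMount.SubPath and keeps reading it while the pod runs. A sketch of the two relevant pieces (paths and names are assumptions):

package sketch

import corev1 "k8s.io/api/core/v1"

// downwardVolume projects pod metadata into files that are updated atomically.
var downwardVolume = corev1.Volume{
	Name: "downward",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		},
	},
}

// downwardMount mounts a single file from the volume rather than the whole
// directory; SubPath resolution across the atomic symlink swap is what the
// test is probing.
var downwardMount = corev1.VolumeMount{
	Name:      "downward",
	MountPath: "/etc/podname",
	SubPath:   "podname",
}
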
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:10:53.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7901
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7901
STEP: Deleting pre-stop pod
Jan  5 14:11:15.070: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:11:15.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7901" for this suite.
Jan  5 14:11:59.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:11:59.290: INFO: namespace prestop-7901 deletion completed in 44.166793415s

• [SLOW TEST:65.523 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
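
The mechanism under test: a preStop lifecycle hook runs before the kubelet sends SIGTERM, so deleting the tester pod drives one request to the server pod — which is why the JSON above records "prestop": 1. A sketch against the v1.15-era API (where the hook type is corev1.Handler; newer releases renamed it LifecycleHandler); the path, port, and host are illustrative assumptions:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopContainer fires an HTTP GET at the server pod during pod deletion,
// before this container receives SIGTERM.
var preStopContainer = corev1.Container{
	Name:  "tester",
	Image: "busybox",
	Lifecycle: &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/write",
				Port: intstr.FromInt(8080),
				Host: "10.0.0.1", // placeholder: the test targets the server pod's IP
			},
		},
	},
}
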
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:11:59.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:11:59.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275" in namespace "downward-api-2467" to be "success or failure"
Jan  5 14:11:59.424: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275": Phase="Pending", Reason="", readiness=false. Elapsed: 35.947528ms
Jan  5 14:12:01.440: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051181126s
Jan  5 14:12:03.455: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066207447s
Jan  5 14:12:05.470: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081337432s
Jan  5 14:12:07.479: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090398703s
STEP: Saw pod success
Jan  5 14:12:07.479: INFO: Pod "downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275" satisfied condition "success or failure"
Jan  5 14:12:07.488: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275 container client-container: 
STEP: delete the pod
Jan  5 14:12:07.552: INFO: Waiting for pod downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275 to disappear
Jan  5 14:12:07.565: INFO: Pod downwardapi-volume-544b2406-e1e1-4c8c-a389-f899460e8275 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:12:07.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2467" for this suite.
Jan  5 14:12:13.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:12:13.863: INFO: namespace downward-api-2467 deletion completed in 6.289378795s

• [SLOW TEST:14.572 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
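
What this checks: a downward API volume file that references limits.memory for a container that declares no memory limit falls back to the node's allocatable memory, and the test compares the file's contents against that value. A sketch of the volume definition (names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// memLimitVolume projects the container's effective memory limit into a file.
// With no limit set on "client-container", the projected value is the node's
// allocatable memory — the fallback this test asserts.
var memLimitVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "memory_limit",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: "client-container",
					Resource:      "limits.memory",
				},
			}},
		},
	},
}
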
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:12:13.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-46b9bad0-4524-4c8e-a142-d11e259b43e0
STEP: Creating a pod to test consume secrets
Jan  5 14:12:14.284: INFO: Waiting up to 5m0s for pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf" in namespace "secrets-3212" to be "success or failure"
Jan  5 14:12:14.300: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.641953ms
Jan  5 14:12:16.313: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029107728s
Jan  5 14:12:18.322: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037396468s
Jan  5 14:12:20.331: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046208167s
Jan  5 14:12:22.337: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052298193s
STEP: Saw pod success
Jan  5 14:12:22.337: INFO: Pod "pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf" satisfied condition "success or failure"
Jan  5 14:12:22.339: INFO: Trying to get logs from node iruya-node pod pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf container secret-volume-test: 
STEP: delete the pod
Jan  5 14:12:22.429: INFO: Waiting for pod pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf to disappear
Jan  5 14:12:22.440: INFO: Pod pod-secrets-83f3da85-19ad-4480-a20b-5427632f7dcf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:12:22.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3212" for this suite.
Jan  5 14:12:28.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:12:28.699: INFO: namespace secrets-3212 deletion completed in 6.251779277s
STEP: Destroying namespace "secret-namespace-2270" for this suite.
Jan  5 14:12:34.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:12:34.861: INFO: namespace secret-namespace-2270 deletion completed in 6.161581944s

• [SLOW TEST:20.996 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
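
The two namespaces destroyed above are the point of the test: a secret with the same name exists in both, and the pod's volume source names only the secret, so resolution happens in the pod's own namespace. A sketch — the shared secret name and data values are assumptions, while the namespaces are the ones from this run:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Two secrets sharing a name in different namespaces. The pod in
// "secrets-3212" mounts "shared-name" with no namespace qualifier, so it
// must see secretA's data, never secretB's.
var (
	secretA = corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: "secrets-3212"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	secretB = corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: "secret-namespace-2270"},
		Data:       map[string][]byte{"data-1": []byte("other")},
	}
	secretVolume = corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "shared-name"},
		},
	}
)
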
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:12:34.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  5 14:12:34.927: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:12:47.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6637" for this suite.
Jan  5 14:12:53.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:12:53.571: INFO: namespace init-container-6637 deletion completed in 6.177887s

• [SLOW TEST:18.710 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
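
"PodSpec: initContainers in spec.initContainers" above is the framework dumping the spec it created: init containers run sequentially to completion before any regular container starts, and with RestartPolicyNever a failed init container fails the pod outright. A minimal sketch (images and commands are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initPod runs init-1 then init-2 to completion before run-1 starts; under
// RestartPolicyNever nothing is retried on failure.
var initPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{
			{Name: "init-1", Image: "busybox", Command: []string{"true"}},
			{Name: "init-2", Image: "busybox", Command: []string{"true"}},
		},
		Containers: []corev1.Container{
			{Name: "run-1", Image: "busybox", Command: []string{"true"}},
		},
	},
}
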
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:12:53.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test the emptydir volume type on tmpfs
Jan  5 14:12:53.706: INFO: Waiting up to 5m0s for pod "pod-7a88a820-853b-4882-8bea-44c4b2445458" in namespace "emptydir-2408" to be "success or failure"
Jan  5 14:12:53.719: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458": Phase="Pending", Reason="", readiness=false. Elapsed: 13.448825ms
Jan  5 14:12:55.728: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021908634s
Jan  5 14:12:57.733: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026895237s
Jan  5 14:12:59.779: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072552396s
Jan  5 14:13:01.808: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101586273s
STEP: Saw pod success
Jan  5 14:13:01.808: INFO: Pod "pod-7a88a820-853b-4882-8bea-44c4b2445458" satisfied condition "success or failure"
Jan  5 14:13:01.821: INFO: Trying to get logs from node iruya-node pod pod-7a88a820-853b-4882-8bea-44c4b2445458 container test-container: 
STEP: delete the pod
Jan  5 14:13:02.028: INFO: Waiting for pod pod-7a88a820-853b-4882-8bea-44c4b2445458 to disappear
Jan  5 14:13:02.080: INFO: Pod pod-7a88a820-853b-4882-8bea-44c4b2445458 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:13:02.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2408" for this suite.
Jan  5 14:13:08.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:13:08.293: INFO: namespace emptydir-2408 deletion completed in 6.206673287s

• [SLOW TEST:14.721 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
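
Same check as the default-medium test earlier, but with the emptyDir backed by tmpfs; the only spec difference is the volume's Medium field. A sketch:

package sketch

import corev1 "k8s.io/api/core/v1"

// tmpfsVolume: StorageMediumMemory makes the kubelet mount a tmpfs, so the
// volume's contents live in memory rather than on node disk.
var tmpfsVolume = corev1.Volume{
	Name: "test-volume",
	VolumeSource: corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{
			Medium: corev1.StorageMediumMemory,
		},
	},
}
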
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:13:08.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b852195a-778a-4555-bd5d-b2683886baab
STEP: Creating a pod to test consume secrets
Jan  5 14:13:08.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23" in namespace "projected-4295" to be "success or failure"
Jan  5 14:13:08.464: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65242ms
Jan  5 14:13:10.488: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035118772s
Jan  5 14:13:12.500: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046358193s
Jan  5 14:13:14.724: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27058947s
Jan  5 14:13:16.781: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.327536995s
Jan  5 14:13:18.792: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.338371094s
STEP: Saw pod success
Jan  5 14:13:18.792: INFO: Pod "pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23" satisfied condition "success or failure"
Jan  5 14:13:18.796: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 14:13:18.883: INFO: Waiting for pod pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23 to disappear
Jan  5 14:13:18.940: INFO: Pod pod-projected-secrets-ce3ab246-81f0-4ac3-8503-58e266dc4e23 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:13:18.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4295" for this suite.
Jan  5 14:13:24.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:13:25.096: INFO: namespace projected-4295 deletion completed in 6.148229102s

• [SLOW TEST:16.801 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
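
"With mappings" means the projected secret remaps keys to new file paths instead of exposing them under the key names. A sketch of the volume (the secret name is shortened and the key/path pair is an assumption):

package sketch

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume exposes the secret key "data-1" at the remapped path
// "new-path-data-1" inside the mount, instead of at "data-1".
var projectedSecretVolume = corev1.Volume{
	Name: "projected-secret-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
					Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
				},
			}},
		},
	},
}
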
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:13:25.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  5 14:13:25.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  5 14:13:25.196: INFO: Waiting for terminating namespaces to be deleted...
Jan  5 14:13:25.199: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  5 14:13:25.207: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  5 14:13:25.208: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:13:25.208: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:13:25.208: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.208: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  5 14:13:25.208: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  5 14:13:25.216: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  5 14:13:25.216: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  5 14:13:25.216: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container coredns ready: true, restart count 0
Jan  5 14:13:25.216: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container etcd ready: true, restart count 0
Jan  5 14:13:25.216: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  5 14:13:25.216: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:13:25.216: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:13:25.216: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container coredns ready: true, restart count 0
Jan  5 14:13:25.216: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  5 14:13:25.216: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  5 14:13:25.216: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-142ec5af-0219-4584-adc7-bb14ca757e2d 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-142ec5af-0219-4584-adc7-bb14ca757e2d off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-142ec5af-0219-4584-adc7-bb14ca757e2d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:13:43.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9906" for this suite.
Jan  5 14:13:57.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:13:57.714: INFO: namespace sched-pred-9906 deletion completed in 14.133490634s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:32.617 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
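
The STEP lines spell out the flow: schedule an unlabeled pod to find a usable node, label that node with the random key/value logged above, then relaunch the pod with a matching NodeSelector and verify it lands there. A sketch of the relaunched pod's selector, using the label from this run (the container image is a placeholder):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorPod can only schedule onto a node carrying the random e2e
// label — i.e. iruya-node, once the test has applied it.
var nodeSelectorPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{
			"kubernetes.io/e2e-142ec5af-0219-4584-adc7-bb14ca757e2d": "42",
		},
		Containers: []corev1.Container{
			{Name: "with-labels", Image: "k8s.gcr.io/pause:3.1"},
		},
	},
}
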
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:13:57.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-57v4
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 14:13:57.910: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-57v4" in namespace "subpath-9090" to be "success or failure"
Jan  5 14:13:57.937: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.217973ms
Jan  5 14:13:59.947: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037023181s
Jan  5 14:14:01.961: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050353652s
Jan  5 14:14:03.975: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064537183s
Jan  5 14:14:05.983: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 8.072471329s
Jan  5 14:14:07.999: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 10.088224029s
Jan  5 14:14:10.021: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 12.110259805s
Jan  5 14:14:12.032: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 14.121686637s
Jan  5 14:14:14.042: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 16.1317241s
Jan  5 14:14:16.054: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 18.14318242s
Jan  5 14:14:18.061: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 20.15093993s
Jan  5 14:14:20.070: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 22.159192026s
Jan  5 14:14:22.079: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 24.1681557s
Jan  5 14:14:24.104: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 26.193882391s
Jan  5 14:14:26.118: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Running", Reason="", readiness=true. Elapsed: 28.207440496s
Jan  5 14:14:28.126: INFO: Pod "pod-subpath-test-secret-57v4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.215546142s
STEP: Saw pod success
Jan  5 14:14:28.126: INFO: Pod "pod-subpath-test-secret-57v4" satisfied condition "success or failure"
Jan  5 14:14:28.131: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-57v4 container test-container-subpath-secret-57v4: 
STEP: delete the pod
Jan  5 14:14:28.236: INFO: Waiting for pod pod-subpath-test-secret-57v4 to disappear
Jan  5 14:14:28.265: INFO: Pod pod-subpath-test-secret-57v4 no longer exists
STEP: Deleting pod pod-subpath-test-secret-57v4
Jan  5 14:14:28.266: INFO: Deleting pod "pod-subpath-test-secret-57v4" in namespace "subpath-9090"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:14:28.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9090" for this suite.
Jan  5 14:14:34.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:14:34.429: INFO: namespace subpath-9090 deletion completed in 6.151481024s

• [SLOW TEST:36.714 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
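
The volume side of this test mirrors the downward API subpath case above, with a secret as the atomic-writer source. The repeated "Waiting up to 5m0s ... to be success or failure" lines come from a phase-polling loop; a simplified sketch against v1.15-era client-go (newer client-go adds a context argument to Get):

package sketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod's phase every 2s for up to 5m, succeeding
// on PodSucceeded and failing fast on PodFailed — the loop behind each
// "Phase=..." line in the log.
func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		}
		return false, nil
	})
}
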
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:14:34.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 14:14:34.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8818'
Jan  5 14:14:36.952: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 14:14:36.952: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  5 14:14:39.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8818'
Jan  5 14:14:39.257: INFO: stderr: ""
Jan  5 14:14:39.257: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:14:39.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8818" for this suite.
Jan  5 14:14:45.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:14:45.439: INFO: namespace kubectl-8818 deletion completed in 6.173664407s

• [SLOW TEST:11.008 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
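
This test shells out to the kubectl binary rather than using client-go, which is why stdout and stderr are captured verbatim above — including the v1.15 deprecation warning for --generator=deployment/apps.v1, a flag later removed. The equivalent invocation from Go, mirroring the logged command line:

package sketch

import "os/exec"

// runKubectl reproduces the command the test logs. CombinedOutput merges
// stdout and stderr; the framework captures them separately.
func runKubectl() ([]byte, error) {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"run", "e2e-test-nginx-deployment",
		"--image=docker.io/library/nginx:1.14-alpine",
		"--generator=deployment/apps.v1",
		"--namespace=kubectl-8818")
	return cmd.CombinedOutput()
}
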
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:14:45.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6393
I0105 14:14:45.639803       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6393, replica count: 1
I0105 14:14:46.691671       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:47.692424       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:48.693351       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:49.693867       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:50.694318       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:51.694782       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:52.695231       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:14:53.695946       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
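
Each "Created:"/"Got endpoints:" pair that follows is one latency sample: the test creates a Service selecting the RC's pod and times how long until the corresponding Endpoints object is populated, then asserts on percentiles over all samples. The real test watches Endpoints through an informer; a simplified polling sketch against v1.15-era client-go:

package sketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// timeEndpointPropagation measures service-to-endpoints latency for one
// sample: create the Service, then poll until its Endpoints has an address.
func timeEndpointPropagation(c kubernetes.Interface, ns string, svc *corev1.Service) (time.Duration, error) {
	start := time.Now()
	if _, err := c.CoreV1().Services(ns).Create(svc); err != nil {
		return 0, err
	}
	err := wait.PollImmediate(50*time.Millisecond, 30*time.Second, func() (bool, error) {
		ep, getErr := c.CoreV1().Endpoints(ns).Get(svc.Name, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // Endpoints object may not exist yet; keep polling
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	return time.Since(start), err
}
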
Jan  5 14:14:53.905: INFO: Created: latency-svc-t6q8x
Jan  5 14:14:53.916: INFO: Got endpoints: latency-svc-t6q8x [120.150214ms]
Jan  5 14:14:53.970: INFO: Created: latency-svc-78d6d
Jan  5 14:14:53.987: INFO: Got endpoints: latency-svc-78d6d [69.584068ms]
Jan  5 14:14:54.125: INFO: Created: latency-svc-pcgrs
Jan  5 14:14:54.157: INFO: Created: latency-svc-nmkfg
Jan  5 14:14:54.157: INFO: Got endpoints: latency-svc-pcgrs [239.948578ms]
Jan  5 14:14:54.169: INFO: Got endpoints: latency-svc-nmkfg [250.568295ms]
Jan  5 14:14:54.327: INFO: Created: latency-svc-zrh4t
Jan  5 14:14:54.374: INFO: Got endpoints: latency-svc-zrh4t [457.173576ms]
Jan  5 14:14:54.380: INFO: Created: latency-svc-fpsf6
Jan  5 14:14:54.389: INFO: Got endpoints: latency-svc-fpsf6 [470.116121ms]
Jan  5 14:14:54.893: INFO: Created: latency-svc-242qc
Jan  5 14:14:54.903: INFO: Got endpoints: latency-svc-242qc [984.042917ms]
Jan  5 14:14:55.006: INFO: Created: latency-svc-vqvkb
Jan  5 14:14:55.030: INFO: Created: latency-svc-g5cdg
Jan  5 14:14:55.035: INFO: Got endpoints: latency-svc-vqvkb [1.116111622s]
Jan  5 14:14:55.041: INFO: Got endpoints: latency-svc-g5cdg [1.122895518s]
Jan  5 14:14:55.090: INFO: Created: latency-svc-pswnp
Jan  5 14:14:55.090: INFO: Got endpoints: latency-svc-pswnp [1.171230803s]
Jan  5 14:14:55.204: INFO: Created: latency-svc-f8mw8
Jan  5 14:14:55.212: INFO: Got endpoints: latency-svc-f8mw8 [1.292847724s]
Jan  5 14:14:55.274: INFO: Created: latency-svc-984bb
Jan  5 14:14:55.274: INFO: Got endpoints: latency-svc-984bb [1.355353262s]
Jan  5 14:14:55.394: INFO: Created: latency-svc-m7xjr
Jan  5 14:14:55.396: INFO: Got endpoints: latency-svc-m7xjr [1.477146277s]
Jan  5 14:14:55.416: INFO: Created: latency-svc-jpv8z
Jan  5 14:14:55.426: INFO: Got endpoints: latency-svc-jpv8z [1.506534164s]
Jan  5 14:14:55.547: INFO: Created: latency-svc-cd5wj
Jan  5 14:14:55.556: INFO: Got endpoints: latency-svc-cd5wj [1.63782906s]
Jan  5 14:14:55.692: INFO: Created: latency-svc-lhpdr
Jan  5 14:14:55.698: INFO: Got endpoints: latency-svc-lhpdr [1.781168472s]
Jan  5 14:14:55.765: INFO: Created: latency-svc-44bm7
Jan  5 14:14:55.781: INFO: Got endpoints: latency-svc-44bm7 [1.793277013s]
Jan  5 14:14:55.919: INFO: Created: latency-svc-jb69s
Jan  5 14:14:55.952: INFO: Got endpoints: latency-svc-jb69s [1.794861955s]
Jan  5 14:14:55.965: INFO: Created: latency-svc-tzjc5
Jan  5 14:14:55.976: INFO: Got endpoints: latency-svc-tzjc5 [1.806107367s]
Jan  5 14:14:56.079: INFO: Created: latency-svc-h2rwb
Jan  5 14:14:56.094: INFO: Got endpoints: latency-svc-h2rwb [1.718916631s]
Jan  5 14:14:56.144: INFO: Created: latency-svc-nwfn4
Jan  5 14:14:56.144: INFO: Got endpoints: latency-svc-nwfn4 [1.755517213s]
Jan  5 14:14:56.244: INFO: Created: latency-svc-w2vqx
Jan  5 14:14:56.248: INFO: Got endpoints: latency-svc-w2vqx [1.345109436s]
Jan  5 14:14:56.282: INFO: Created: latency-svc-tjprh
Jan  5 14:14:56.286: INFO: Got endpoints: latency-svc-tjprh [1.251318036s]
Jan  5 14:14:56.340: INFO: Created: latency-svc-cwq82
Jan  5 14:14:56.427: INFO: Got endpoints: latency-svc-cwq82 [1.386394313s]
Jan  5 14:14:56.476: INFO: Created: latency-svc-ggrqz
Jan  5 14:14:56.498: INFO: Got endpoints: latency-svc-ggrqz [1.407751515s]
Jan  5 14:14:56.666: INFO: Created: latency-svc-x2f6n
Jan  5 14:14:56.686: INFO: Got endpoints: latency-svc-x2f6n [1.474379404s]
Jan  5 14:14:56.720: INFO: Created: latency-svc-vbm6h
Jan  5 14:14:56.729: INFO: Got endpoints: latency-svc-vbm6h [1.454701323s]
Jan  5 14:14:56.826: INFO: Created: latency-svc-xq8kq
Jan  5 14:14:56.839: INFO: Got endpoints: latency-svc-xq8kq [1.442635347s]
Jan  5 14:14:56.886: INFO: Created: latency-svc-68d5z
Jan  5 14:14:56.900: INFO: Got endpoints: latency-svc-68d5z [1.473805918s]
Jan  5 14:14:57.008: INFO: Created: latency-svc-95b6h
Jan  5 14:14:57.018: INFO: Got endpoints: latency-svc-95b6h [1.461837236s]
Jan  5 14:14:57.072: INFO: Created: latency-svc-28jcz
Jan  5 14:14:57.103: INFO: Got endpoints: latency-svc-28jcz [1.404806299s]
Jan  5 14:14:57.107: INFO: Created: latency-svc-k2rh6
Jan  5 14:14:57.169: INFO: Got endpoints: latency-svc-k2rh6 [1.388227613s]
Jan  5 14:14:57.218: INFO: Created: latency-svc-59dsd
Jan  5 14:14:57.228: INFO: Got endpoints: latency-svc-59dsd [1.274942684s]
Jan  5 14:14:57.347: INFO: Created: latency-svc-pd496
Jan  5 14:14:57.353: INFO: Got endpoints: latency-svc-pd496 [1.377335319s]
Jan  5 14:14:57.386: INFO: Created: latency-svc-zxrgg
Jan  5 14:14:57.391: INFO: Got endpoints: latency-svc-zxrgg [1.297294436s]
Jan  5 14:14:57.544: INFO: Created: latency-svc-xcfl9
Jan  5 14:14:57.551: INFO: Got endpoints: latency-svc-xcfl9 [1.406721933s]
Jan  5 14:14:57.655: INFO: Created: latency-svc-ddb95
Jan  5 14:14:57.737: INFO: Got endpoints: latency-svc-ddb95 [1.488434143s]
Jan  5 14:14:57.737: INFO: Created: latency-svc-47cf5
Jan  5 14:14:57.753: INFO: Got endpoints: latency-svc-47cf5 [1.466363369s]
Jan  5 14:14:57.833: INFO: Created: latency-svc-7n4gx
Jan  5 14:14:57.893: INFO: Got endpoints: latency-svc-7n4gx [1.465665487s]
Jan  5 14:14:57.938: INFO: Created: latency-svc-9wrf9
Jan  5 14:14:57.952: INFO: Got endpoints: latency-svc-9wrf9 [1.453005652s]
Jan  5 14:14:57.994: INFO: Created: latency-svc-92sz8
Jan  5 14:14:58.076: INFO: Got endpoints: latency-svc-92sz8 [1.388959646s]
Jan  5 14:14:58.103: INFO: Created: latency-svc-xzwb6
Jan  5 14:14:58.105: INFO: Got endpoints: latency-svc-xzwb6 [1.376091253s]
Jan  5 14:14:58.151: INFO: Created: latency-svc-8mrqz
Jan  5 14:14:58.163: INFO: Got endpoints: latency-svc-8mrqz [1.324004156s]
Jan  5 14:14:58.292: INFO: Created: latency-svc-qnskc
Jan  5 14:14:58.296: INFO: Got endpoints: latency-svc-qnskc [1.396024531s]
Jan  5 14:14:58.436: INFO: Created: latency-svc-v4s45
Jan  5 14:14:58.452: INFO: Got endpoints: latency-svc-v4s45 [1.433697171s]
Jan  5 14:14:58.510: INFO: Created: latency-svc-54ls5
Jan  5 14:14:58.523: INFO: Got endpoints: latency-svc-54ls5 [1.419291814s]
Jan  5 14:14:58.643: INFO: Created: latency-svc-jkgmf
Jan  5 14:14:58.654: INFO: Got endpoints: latency-svc-jkgmf [1.483685734s]
Jan  5 14:14:58.700: INFO: Created: latency-svc-265m2
Jan  5 14:14:58.709: INFO: Got endpoints: latency-svc-265m2 [1.480814435s]
Jan  5 14:14:58.819: INFO: Created: latency-svc-rwnb4
Jan  5 14:14:58.837: INFO: Got endpoints: latency-svc-rwnb4 [1.483607691s]
Jan  5 14:14:58.890: INFO: Created: latency-svc-gc9jp
Jan  5 14:14:58.903: INFO: Got endpoints: latency-svc-gc9jp [1.511244295s]
Jan  5 14:14:59.089: INFO: Created: latency-svc-vmw4d
Jan  5 14:14:59.089: INFO: Got endpoints: latency-svc-vmw4d [1.537358736s]
Jan  5 14:14:59.187: INFO: Created: latency-svc-pz4rh
Jan  5 14:14:59.198: INFO: Got endpoints: latency-svc-pz4rh [1.460871576s]
Jan  5 14:14:59.241: INFO: Created: latency-svc-phhcn
Jan  5 14:14:59.247: INFO: Got endpoints: latency-svc-phhcn [1.493650846s]
Jan  5 14:14:59.283: INFO: Created: latency-svc-gt8bs
Jan  5 14:14:59.362: INFO: Got endpoints: latency-svc-gt8bs [1.468684795s]
Jan  5 14:14:59.436: INFO: Created: latency-svc-wx2rs
Jan  5 14:14:59.440: INFO: Got endpoints: latency-svc-wx2rs [1.488297911s]
Jan  5 14:14:59.543: INFO: Created: latency-svc-gdlfq
Jan  5 14:14:59.550: INFO: Got endpoints: latency-svc-gdlfq [1.474285086s]
Jan  5 14:14:59.769: INFO: Created: latency-svc-k9l9x
Jan  5 14:14:59.793: INFO: Got endpoints: latency-svc-k9l9x [1.688003372s]
Jan  5 14:15:00.003: INFO: Created: latency-svc-c75jm
Jan  5 14:15:00.023: INFO: Got endpoints: latency-svc-c75jm [1.859263326s]
Jan  5 14:15:00.070: INFO: Created: latency-svc-bk2b6
Jan  5 14:15:00.137: INFO: Got endpoints: latency-svc-bk2b6 [1.839970584s]
Jan  5 14:15:00.172: INFO: Created: latency-svc-zhc2j
Jan  5 14:15:00.185: INFO: Got endpoints: latency-svc-zhc2j [1.732713395s]
Jan  5 14:15:00.210: INFO: Created: latency-svc-7fwp9
Jan  5 14:15:00.213: INFO: Got endpoints: latency-svc-7fwp9 [75.662393ms]
Jan  5 14:15:00.309: INFO: Created: latency-svc-ll9vh
Jan  5 14:15:00.324: INFO: Got endpoints: latency-svc-ll9vh [1.800549003s]
Jan  5 14:15:00.385: INFO: Created: latency-svc-zh7gn
Jan  5 14:15:00.404: INFO: Got endpoints: latency-svc-zh7gn [1.749524638s]
Jan  5 14:15:00.506: INFO: Created: latency-svc-r64c8
Jan  5 14:15:00.550: INFO: Got endpoints: latency-svc-r64c8 [1.841121359s]
Jan  5 14:15:00.551: INFO: Created: latency-svc-sh2gf
Jan  5 14:15:00.557: INFO: Got endpoints: latency-svc-sh2gf [1.719746746s]
Jan  5 14:15:00.671: INFO: Created: latency-svc-5fbkc
Jan  5 14:15:00.684: INFO: Got endpoints: latency-svc-5fbkc [1.78145295s]
Jan  5 14:15:00.740: INFO: Created: latency-svc-vxmdb
Jan  5 14:15:00.808: INFO: Got endpoints: latency-svc-vxmdb [1.719424196s]
Jan  5 14:15:00.826: INFO: Created: latency-svc-tq9r7
Jan  5 14:15:00.844: INFO: Got endpoints: latency-svc-tq9r7 [1.64491741s]
Jan  5 14:15:00.887: INFO: Created: latency-svc-r9qqf
Jan  5 14:15:00.968: INFO: Got endpoints: latency-svc-r9qqf [1.721091667s]
Jan  5 14:15:01.012: INFO: Created: latency-svc-4ftnd
Jan  5 14:15:01.016: INFO: Got endpoints: latency-svc-4ftnd [1.652466764s]
Jan  5 14:15:01.051: INFO: Created: latency-svc-bq5d4
Jan  5 14:15:01.126: INFO: Got endpoints: latency-svc-bq5d4 [1.685625122s]
Jan  5 14:15:01.194: INFO: Created: latency-svc-jnrgv
Jan  5 14:15:01.207: INFO: Got endpoints: latency-svc-jnrgv [1.656579794s]
Jan  5 14:15:01.307: INFO: Created: latency-svc-8wh5l
Jan  5 14:15:01.313: INFO: Got endpoints: latency-svc-8wh5l [1.519415952s]
Jan  5 14:15:01.350: INFO: Created: latency-svc-rv8mm
Jan  5 14:15:01.461: INFO: Got endpoints: latency-svc-rv8mm [1.438541402s]
Jan  5 14:15:01.473: INFO: Created: latency-svc-gfv6w
Jan  5 14:15:01.480: INFO: Got endpoints: latency-svc-gfv6w [1.294974289s]
Jan  5 14:15:01.556: INFO: Created: latency-svc-b7shd
Jan  5 14:15:01.556: INFO: Got endpoints: latency-svc-b7shd [1.34356016s]
Jan  5 14:15:01.711: INFO: Created: latency-svc-4fhjj
Jan  5 14:15:01.732: INFO: Got endpoints: latency-svc-4fhjj [1.407690586s]
Jan  5 14:15:01.758: INFO: Created: latency-svc-7lk9z
Jan  5 14:15:01.761: INFO: Got endpoints: latency-svc-7lk9z [1.356743419s]
Jan  5 14:15:01.916: INFO: Created: latency-svc-fdk6k
Jan  5 14:15:01.929: INFO: Got endpoints: latency-svc-fdk6k [1.378296865s]
Jan  5 14:15:02.011: INFO: Created: latency-svc-7xgxr
Jan  5 14:15:02.086: INFO: Got endpoints: latency-svc-7xgxr [1.528238452s]
Jan  5 14:15:02.126: INFO: Created: latency-svc-8qjs7
Jan  5 14:15:02.136: INFO: Got endpoints: latency-svc-8qjs7 [1.451207053s]
Jan  5 14:15:02.186: INFO: Created: latency-svc-swfpj
Jan  5 14:15:02.283: INFO: Got endpoints: latency-svc-swfpj [1.473903128s]
Jan  5 14:15:02.322: INFO: Created: latency-svc-l6gp9
Jan  5 14:15:02.334: INFO: Got endpoints: latency-svc-l6gp9 [1.489951256s]
Jan  5 14:15:02.464: INFO: Created: latency-svc-zl26m
Jan  5 14:15:02.486: INFO: Got endpoints: latency-svc-zl26m [1.517486589s]
Jan  5 14:15:02.553: INFO: Created: latency-svc-kmwdh
Jan  5 14:15:02.636: INFO: Got endpoints: latency-svc-kmwdh [1.619911947s]
Jan  5 14:15:02.679: INFO: Created: latency-svc-qb7gw
Jan  5 14:15:02.693: INFO: Got endpoints: latency-svc-qb7gw [1.566221242s]
Jan  5 14:15:02.826: INFO: Created: latency-svc-qt27c
Jan  5 14:15:02.876: INFO: Got endpoints: latency-svc-qt27c [1.669058206s]
Jan  5 14:15:02.882: INFO: Created: latency-svc-djhvr
Jan  5 14:15:02.898: INFO: Got endpoints: latency-svc-djhvr [1.584870511s]
Jan  5 14:15:02.998: INFO: Created: latency-svc-mf27n
Jan  5 14:15:03.004: INFO: Got endpoints: latency-svc-mf27n [1.5420291s]
Jan  5 14:15:03.073: INFO: Created: latency-svc-l9nl5
Jan  5 14:15:03.233: INFO: Got endpoints: latency-svc-l9nl5 [1.752396814s]
Jan  5 14:15:03.247: INFO: Created: latency-svc-2s6b6
Jan  5 14:15:03.253: INFO: Got endpoints: latency-svc-2s6b6 [1.696607412s]
Jan  5 14:15:03.287: INFO: Created: latency-svc-87zmb
Jan  5 14:15:03.296: INFO: Got endpoints: latency-svc-87zmb [1.563921482s]
Jan  5 14:15:03.442: INFO: Created: latency-svc-mhkwq
Jan  5 14:15:03.456: INFO: Got endpoints: latency-svc-mhkwq [1.695241789s]
Jan  5 14:15:03.517: INFO: Created: latency-svc-l7bp7
Jan  5 14:15:03.717: INFO: Got endpoints: latency-svc-l7bp7 [1.787233903s]
Jan  5 14:15:03.760: INFO: Created: latency-svc-4ttd5
Jan  5 14:15:03.764: INFO: Got endpoints: latency-svc-4ttd5 [1.678726982s]
Jan  5 14:15:03.817: INFO: Created: latency-svc-j4dft
Jan  5 14:15:03.942: INFO: Got endpoints: latency-svc-j4dft [1.806003109s]
Jan  5 14:15:03.962: INFO: Created: latency-svc-c8gck
Jan  5 14:15:03.988: INFO: Got endpoints: latency-svc-c8gck [1.704904604s]
Jan  5 14:15:04.017: INFO: Created: latency-svc-prsgx
Jan  5 14:15:04.033: INFO: Got endpoints: latency-svc-prsgx [1.699415496s]
Jan  5 14:15:04.146: INFO: Created: latency-svc-695n8
Jan  5 14:15:04.187: INFO: Got endpoints: latency-svc-695n8 [1.700048533s]
Jan  5 14:15:04.193: INFO: Created: latency-svc-664hp
Jan  5 14:15:04.325: INFO: Got endpoints: latency-svc-664hp [1.688722316s]
Jan  5 14:15:04.344: INFO: Created: latency-svc-dtxq6
Jan  5 14:15:04.348: INFO: Got endpoints: latency-svc-dtxq6 [1.65460894s]
Jan  5 14:15:04.400: INFO: Created: latency-svc-d4wh8
Jan  5 14:15:04.413: INFO: Got endpoints: latency-svc-d4wh8 [1.535950421s]
Jan  5 14:15:04.549: INFO: Created: latency-svc-kmr7x
Jan  5 14:15:04.556: INFO: Got endpoints: latency-svc-kmr7x [1.657793578s]
Jan  5 14:15:04.721: INFO: Created: latency-svc-ctjgv
Jan  5 14:15:04.734: INFO: Got endpoints: latency-svc-ctjgv [1.729665635s]
Jan  5 14:15:04.819: INFO: Created: latency-svc-z2rft
Jan  5 14:15:04.942: INFO: Got endpoints: latency-svc-z2rft [1.708720312s]
Jan  5 14:15:04.972: INFO: Created: latency-svc-vvgwj
Jan  5 14:15:04.983: INFO: Got endpoints: latency-svc-vvgwj [1.729829638s]
Jan  5 14:15:05.023: INFO: Created: latency-svc-mqb95
Jan  5 14:15:05.030: INFO: Got endpoints: latency-svc-mqb95 [1.733407505s]
Jan  5 14:15:05.241: INFO: Created: latency-svc-xm898
Jan  5 14:15:05.272: INFO: Got endpoints: latency-svc-xm898 [1.81585666s]
Jan  5 14:15:05.331: INFO: Created: latency-svc-vn6xk
Jan  5 14:15:05.339: INFO: Got endpoints: latency-svc-vn6xk [1.621434961s]
Jan  5 14:15:05.498: INFO: Created: latency-svc-mjffg
Jan  5 14:15:05.549: INFO: Got endpoints: latency-svc-mjffg [1.784675166s]
Jan  5 14:15:05.731: INFO: Created: latency-svc-cmkl6
Jan  5 14:15:05.736: INFO: Got endpoints: latency-svc-cmkl6 [1.792777103s]
Jan  5 14:15:05.807: INFO: Created: latency-svc-bdpx5
Jan  5 14:15:05.929: INFO: Got endpoints: latency-svc-bdpx5 [1.940041015s]
Jan  5 14:15:05.944: INFO: Created: latency-svc-crpsm
Jan  5 14:15:05.953: INFO: Got endpoints: latency-svc-crpsm [1.919187932s]
Jan  5 14:15:05.991: INFO: Created: latency-svc-pvsnx
Jan  5 14:15:06.101: INFO: Got endpoints: latency-svc-pvsnx [1.913791407s]
Jan  5 14:15:06.105: INFO: Created: latency-svc-6w5rz
Jan  5 14:15:06.115: INFO: Got endpoints: latency-svc-6w5rz [1.788705651s]
Jan  5 14:15:06.155: INFO: Created: latency-svc-x6k8w
Jan  5 14:15:06.159: INFO: Got endpoints: latency-svc-x6k8w [1.810563479s]
Jan  5 14:15:06.340: INFO: Created: latency-svc-d5fjl
Jan  5 14:15:06.356: INFO: Got endpoints: latency-svc-d5fjl [1.942777518s]
Jan  5 14:15:06.427: INFO: Created: latency-svc-tbxh9
Jan  5 14:15:06.586: INFO: Got endpoints: latency-svc-tbxh9 [2.029072297s]
Jan  5 14:15:06.620: INFO: Created: latency-svc-dtznm
Jan  5 14:15:06.774: INFO: Got endpoints: latency-svc-dtznm [2.039633555s]
Jan  5 14:15:06.777: INFO: Created: latency-svc-ntzdl
Jan  5 14:15:06.788: INFO: Got endpoints: latency-svc-ntzdl [1.845416784s]
Jan  5 14:15:06.846: INFO: Created: latency-svc-vqnst
Jan  5 14:15:06.846: INFO: Got endpoints: latency-svc-vqnst [1.863000761s]
Jan  5 14:15:06.993: INFO: Created: latency-svc-pzw7v
Jan  5 14:15:07.004: INFO: Got endpoints: latency-svc-pzw7v [1.9740858s]
Jan  5 14:15:07.059: INFO: Created: latency-svc-kldd6
Jan  5 14:15:07.073: INFO: Got endpoints: latency-svc-kldd6 [1.79978353s]
Jan  5 14:15:07.203: INFO: Created: latency-svc-fcwrk
Jan  5 14:15:07.229: INFO: Got endpoints: latency-svc-fcwrk [1.889862802s]
Jan  5 14:15:07.274: INFO: Created: latency-svc-qs5br
Jan  5 14:15:07.283: INFO: Got endpoints: latency-svc-qs5br [1.733202024s]
Jan  5 14:15:07.413: INFO: Created: latency-svc-q8qbv
Jan  5 14:15:07.418: INFO: Got endpoints: latency-svc-q8qbv [1.682442737s]
Jan  5 14:15:07.478: INFO: Created: latency-svc-5g5jd
Jan  5 14:15:07.492: INFO: Got endpoints: latency-svc-5g5jd [1.563291369s]
Jan  5 14:15:07.590: INFO: Created: latency-svc-86hhk
Jan  5 14:15:07.615: INFO: Got endpoints: latency-svc-86hhk [1.6614413s]
Jan  5 14:15:07.680: INFO: Created: latency-svc-r54nn
Jan  5 14:15:07.686: INFO: Got endpoints: latency-svc-r54nn [1.583971762s]
Jan  5 14:15:07.835: INFO: Created: latency-svc-gll6k
Jan  5 14:15:07.846: INFO: Got endpoints: latency-svc-gll6k [1.730804105s]
Jan  5 14:15:07.988: INFO: Created: latency-svc-c9qrp
Jan  5 14:15:08.000: INFO: Got endpoints: latency-svc-c9qrp [1.841456864s]
Jan  5 14:15:08.050: INFO: Created: latency-svc-gxzfp
Jan  5 14:15:08.059: INFO: Got endpoints: latency-svc-gxzfp [1.701906324s]
Jan  5 14:15:08.179: INFO: Created: latency-svc-phmwv
Jan  5 14:15:08.206: INFO: Got endpoints: latency-svc-phmwv [1.620151886s]
Jan  5 14:15:08.301: INFO: Created: latency-svc-x5fs2
Jan  5 14:15:08.320: INFO: Got endpoints: latency-svc-x5fs2 [1.54588717s]
Jan  5 14:15:08.369: INFO: Created: latency-svc-b8zg6
Jan  5 14:15:08.380: INFO: Got endpoints: latency-svc-b8zg6 [1.591595618s]
Jan  5 14:15:08.544: INFO: Created: latency-svc-wsqrt
Jan  5 14:15:08.613: INFO: Got endpoints: latency-svc-wsqrt [1.766500132s]
Jan  5 14:15:08.619: INFO: Created: latency-svc-6ct2p
Jan  5 14:15:08.689: INFO: Got endpoints: latency-svc-6ct2p [1.684454935s]
Jan  5 14:15:08.726: INFO: Created: latency-svc-79jfk
Jan  5 14:15:08.736: INFO: Got endpoints: latency-svc-79jfk [1.66274141s]
Jan  5 14:15:08.772: INFO: Created: latency-svc-rkxp2
Jan  5 14:15:08.913: INFO: Got endpoints: latency-svc-rkxp2 [1.684334895s]
Jan  5 14:15:08.929: INFO: Created: latency-svc-rdgbj
Jan  5 14:15:08.944: INFO: Got endpoints: latency-svc-rdgbj [1.659921374s]
Jan  5 14:15:08.996: INFO: Created: latency-svc-xln2c
Jan  5 14:15:08.998: INFO: Got endpoints: latency-svc-xln2c [1.579655657s]
Jan  5 14:15:09.148: INFO: Created: latency-svc-7gwd5
Jan  5 14:15:09.155: INFO: Got endpoints: latency-svc-7gwd5 [1.662329522s]
Jan  5 14:15:09.218: INFO: Created: latency-svc-vbjnq
Jan  5 14:15:09.306: INFO: Got endpoints: latency-svc-vbjnq [1.691029835s]
Jan  5 14:15:09.313: INFO: Created: latency-svc-7vrsd
Jan  5 14:15:09.369: INFO: Got endpoints: latency-svc-7vrsd [1.683378379s]
Jan  5 14:15:09.402: INFO: Created: latency-svc-z5v87
Jan  5 14:15:09.531: INFO: Got endpoints: latency-svc-z5v87 [1.68419819s]
Jan  5 14:15:09.600: INFO: Created: latency-svc-h9zk2
Jan  5 14:15:09.737: INFO: Got endpoints: latency-svc-h9zk2 [1.736432712s]
Jan  5 14:15:09.765: INFO: Created: latency-svc-7q785
Jan  5 14:15:09.814: INFO: Got endpoints: latency-svc-7q785 [1.754890141s]
Jan  5 14:15:09.823: INFO: Created: latency-svc-4zb5w
Jan  5 14:15:09.890: INFO: Got endpoints: latency-svc-4zb5w [1.6838904s]
Jan  5 14:15:09.922: INFO: Created: latency-svc-tv9xc
Jan  5 14:15:09.931: INFO: Got endpoints: latency-svc-tv9xc [1.611071973s]
Jan  5 14:15:09.971: INFO: Created: latency-svc-bgzr4
Jan  5 14:15:09.977: INFO: Got endpoints: latency-svc-bgzr4 [1.596711912s]
Jan  5 14:15:10.114: INFO: Created: latency-svc-srjbx
Jan  5 14:15:10.120: INFO: Got endpoints: latency-svc-srjbx [1.506624618s]
Jan  5 14:15:10.167: INFO: Created: latency-svc-t6hxk
Jan  5 14:15:10.173: INFO: Got endpoints: latency-svc-t6hxk [1.483557003s]
Jan  5 14:15:10.262: INFO: Created: latency-svc-hfhrd
Jan  5 14:15:10.266: INFO: Got endpoints: latency-svc-hfhrd [1.530203855s]
Jan  5 14:15:10.339: INFO: Created: latency-svc-ff6fk
Jan  5 14:15:10.350: INFO: Got endpoints: latency-svc-ff6fk [1.435966286s]
Jan  5 14:15:10.447: INFO: Created: latency-svc-8psbm
Jan  5 14:15:10.460: INFO: Got endpoints: latency-svc-8psbm [1.515263021s]
Jan  5 14:15:10.504: INFO: Created: latency-svc-kp8tz
Jan  5 14:15:10.614: INFO: Got endpoints: latency-svc-kp8tz [1.615518162s]
Jan  5 14:15:10.641: INFO: Created: latency-svc-hcpz9
Jan  5 14:15:10.655: INFO: Got endpoints: latency-svc-hcpz9 [1.499431721s]
Jan  5 14:15:10.803: INFO: Created: latency-svc-twsnf
Jan  5 14:15:10.809: INFO: Got endpoints: latency-svc-twsnf [1.502186389s]
Jan  5 14:15:10.854: INFO: Created: latency-svc-dgxhk
Jan  5 14:15:10.858: INFO: Got endpoints: latency-svc-dgxhk [1.488287347s]
Jan  5 14:15:10.986: INFO: Created: latency-svc-ll5ds
Jan  5 14:15:11.005: INFO: Got endpoints: latency-svc-ll5ds [1.474039058s]
Jan  5 14:15:11.102: INFO: Created: latency-svc-vvlfx
Jan  5 14:15:11.103: INFO: Got endpoints: latency-svc-vvlfx [1.365750661s]
Jan  5 14:15:11.158: INFO: Created: latency-svc-zdtxb
Jan  5 14:15:11.160: INFO: Got endpoints: latency-svc-zdtxb [1.345790773s]
Jan  5 14:15:11.276: INFO: Created: latency-svc-vpw6n
Jan  5 14:15:11.277: INFO: Got endpoints: latency-svc-vpw6n [1.385952504s]
Jan  5 14:15:11.355: INFO: Created: latency-svc-pgflb
Jan  5 14:15:11.462: INFO: Got endpoints: latency-svc-pgflb [1.531216068s]
Jan  5 14:15:11.540: INFO: Created: latency-svc-xjz4v
Jan  5 14:15:11.551: INFO: Got endpoints: latency-svc-xjz4v [1.573895854s]
Jan  5 14:15:11.697: INFO: Created: latency-svc-wk76t
Jan  5 14:15:11.715: INFO: Got endpoints: latency-svc-wk76t [1.594858068s]
Jan  5 14:15:11.817: INFO: Created: latency-svc-l7zqk
Jan  5 14:15:11.838: INFO: Got endpoints: latency-svc-l7zqk [1.664941458s]
Jan  5 14:15:11.906: INFO: Created: latency-svc-r2qkj
Jan  5 14:15:11.999: INFO: Got endpoints: latency-svc-r2qkj [1.732540172s]
Jan  5 14:15:12.070: INFO: Created: latency-svc-9bjfq
Jan  5 14:15:12.092: INFO: Got endpoints: latency-svc-9bjfq [1.742185426s]
Jan  5 14:15:12.214: INFO: Created: latency-svc-rxszh
Jan  5 14:15:12.237: INFO: Got endpoints: latency-svc-rxszh [1.776927384s]
Jan  5 14:15:12.371: INFO: Created: latency-svc-xhzbs
Jan  5 14:15:12.390: INFO: Got endpoints: latency-svc-xhzbs [1.776191701s]
Jan  5 14:15:12.446: INFO: Created: latency-svc-wqbkp
Jan  5 14:15:12.517: INFO: Got endpoints: latency-svc-wqbkp [1.862014254s]
Jan  5 14:15:12.558: INFO: Created: latency-svc-s75zn
Jan  5 14:15:12.575: INFO: Got endpoints: latency-svc-s75zn [1.766349105s]
Jan  5 14:15:12.761: INFO: Created: latency-svc-8m9qb
Jan  5 14:15:12.768: INFO: Got endpoints: latency-svc-8m9qb [1.910192478s]
Jan  5 14:15:12.930: INFO: Created: latency-svc-pbtmb
Jan  5 14:15:12.948: INFO: Got endpoints: latency-svc-pbtmb [1.942741127s]
Jan  5 14:15:13.011: INFO: Created: latency-svc-9gkp9
Jan  5 14:15:13.017: INFO: Got endpoints: latency-svc-9gkp9 [1.913085026s]
Jan  5 14:15:13.092: INFO: Created: latency-svc-bxxcz
Jan  5 14:15:13.253: INFO: Got endpoints: latency-svc-bxxcz [2.093008151s]
Jan  5 14:15:13.301: INFO: Created: latency-svc-6j4qd
Jan  5 14:15:13.312: INFO: Got endpoints: latency-svc-6j4qd [2.034355139s]
Jan  5 14:15:13.422: INFO: Created: latency-svc-jn4xk
Jan  5 14:15:13.433: INFO: Got endpoints: latency-svc-jn4xk [1.970150245s]
Jan  5 14:15:13.556: INFO: Created: latency-svc-99gs8
Jan  5 14:15:13.600: INFO: Got endpoints: latency-svc-99gs8 [2.048136607s]
Jan  5 14:15:13.709: INFO: Created: latency-svc-8lgmg
Jan  5 14:15:13.712: INFO: Got endpoints: latency-svc-8lgmg [1.996414807s]
Jan  5 14:15:13.767: INFO: Created: latency-svc-wrb6x
Jan  5 14:15:13.774: INFO: Got endpoints: latency-svc-wrb6x [1.936028981s]
Jan  5 14:15:13.872: INFO: Created: latency-svc-wplmc
Jan  5 14:15:13.883: INFO: Got endpoints: latency-svc-wplmc [1.88413444s]
Jan  5 14:15:13.932: INFO: Created: latency-svc-zh7lc
Jan  5 14:15:13.990: INFO: Got endpoints: latency-svc-zh7lc [1.896757122s]
Jan  5 14:15:14.072: INFO: Created: latency-svc-qqm9f
Jan  5 14:15:14.159: INFO: Got endpoints: latency-svc-qqm9f [1.921773817s]
Jan  5 14:15:14.174: INFO: Created: latency-svc-zk4ww
Jan  5 14:15:14.187: INFO: Got endpoints: latency-svc-zk4ww [1.796704462s]
Jan  5 14:15:14.327: INFO: Created: latency-svc-khbkf
Jan  5 14:15:14.356: INFO: Created: latency-svc-8vhvm
Jan  5 14:15:14.356: INFO: Got endpoints: latency-svc-khbkf [1.838141861s]
Jan  5 14:15:14.371: INFO: Got endpoints: latency-svc-8vhvm [1.794498211s]
Jan  5 14:15:14.468: INFO: Created: latency-svc-b7vbz
Jan  5 14:15:14.475: INFO: Got endpoints: latency-svc-b7vbz [1.706623028s]
Jan  5 14:15:14.524: INFO: Created: latency-svc-x4blp
Jan  5 14:15:14.559: INFO: Got endpoints: latency-svc-x4blp [1.610630199s]
Jan  5 14:15:14.708: INFO: Created: latency-svc-6mzxh
Jan  5 14:15:14.716: INFO: Got endpoints: latency-svc-6mzxh [1.698969731s]
Jan  5 14:15:14.810: INFO: Created: latency-svc-89779
Jan  5 14:15:14.813: INFO: Got endpoints: latency-svc-89779 [1.559989169s]
Jan  5 14:15:14.883: INFO: Created: latency-svc-kh2wt
Jan  5 14:15:14.976: INFO: Got endpoints: latency-svc-kh2wt [1.664075393s]
Jan  5 14:15:15.002: INFO: Created: latency-svc-8ddqz
Jan  5 14:15:15.021: INFO: Got endpoints: latency-svc-8ddqz [1.58758693s]
Jan  5 14:15:15.135: INFO: Created: latency-svc-zl7ht
Jan  5 14:15:15.140: INFO: Got endpoints: latency-svc-zl7ht [1.540105042s]
Jan  5 14:15:15.189: INFO: Created: latency-svc-7465b
Jan  5 14:15:15.199: INFO: Got endpoints: latency-svc-7465b [1.486666238s]
Jan  5 14:15:15.300: INFO: Created: latency-svc-pbh8r
Jan  5 14:15:15.351: INFO: Created: latency-svc-lkct8
Jan  5 14:15:15.351: INFO: Got endpoints: latency-svc-pbh8r [1.576437508s]
Jan  5 14:15:15.359: INFO: Got endpoints: latency-svc-lkct8 [1.475079442s]
Jan  5 14:15:15.557: INFO: Created: latency-svc-7p2s9
Jan  5 14:15:15.633: INFO: Got endpoints: latency-svc-7p2s9 [1.641656824s]
Jan  5 14:15:15.633: INFO: Created: latency-svc-trqp7
Jan  5 14:15:15.710: INFO: Got endpoints: latency-svc-trqp7 [1.549933584s]
Jan  5 14:15:15.758: INFO: Created: latency-svc-qbgcq
Jan  5 14:15:15.766: INFO: Got endpoints: latency-svc-qbgcq [1.578375235s]
Jan  5 14:15:15.766: INFO: Latencies: [69.584068ms 75.662393ms 239.948578ms 250.568295ms 457.173576ms 470.116121ms 984.042917ms 1.116111622s 1.122895518s 1.171230803s 1.251318036s 1.274942684s 1.292847724s 1.294974289s 1.297294436s 1.324004156s 1.34356016s 1.345109436s 1.345790773s 1.355353262s 1.356743419s 1.365750661s 1.376091253s 1.377335319s 1.378296865s 1.385952504s 1.386394313s 1.388227613s 1.388959646s 1.396024531s 1.404806299s 1.406721933s 1.407690586s 1.407751515s 1.419291814s 1.433697171s 1.435966286s 1.438541402s 1.442635347s 1.451207053s 1.453005652s 1.454701323s 1.460871576s 1.461837236s 1.465665487s 1.466363369s 1.468684795s 1.473805918s 1.473903128s 1.474039058s 1.474285086s 1.474379404s 1.475079442s 1.477146277s 1.480814435s 1.483557003s 1.483607691s 1.483685734s 1.486666238s 1.488287347s 1.488297911s 1.488434143s 1.489951256s 1.493650846s 1.499431721s 1.502186389s 1.506534164s 1.506624618s 1.511244295s 1.515263021s 1.517486589s 1.519415952s 1.528238452s 1.530203855s 1.531216068s 1.535950421s 1.537358736s 1.540105042s 1.5420291s 1.54588717s 1.549933584s 1.559989169s 1.563291369s 1.563921482s 1.566221242s 1.573895854s 1.576437508s 1.578375235s 1.579655657s 1.583971762s 1.584870511s 1.58758693s 1.591595618s 1.594858068s 1.596711912s 1.610630199s 1.611071973s 1.615518162s 1.619911947s 1.620151886s 1.621434961s 1.63782906s 1.641656824s 1.64491741s 1.652466764s 1.65460894s 1.656579794s 1.657793578s 1.659921374s 1.6614413s 1.662329522s 1.66274141s 1.664075393s 1.664941458s 1.669058206s 1.678726982s 1.682442737s 1.683378379s 1.6838904s 1.68419819s 1.684334895s 1.684454935s 1.685625122s 1.688003372s 1.688722316s 1.691029835s 1.695241789s 1.696607412s 1.698969731s 1.699415496s 1.700048533s 1.701906324s 1.704904604s 1.706623028s 1.708720312s 1.718916631s 1.719424196s 1.719746746s 1.721091667s 1.729665635s 1.729829638s 1.730804105s 1.732540172s 1.732713395s 1.733202024s 1.733407505s 1.736432712s 1.742185426s 1.749524638s 1.752396814s 1.754890141s 1.755517213s 1.766349105s 1.766500132s 1.776191701s 1.776927384s 1.781168472s 1.78145295s 1.784675166s 1.787233903s 1.788705651s 1.792777103s 1.793277013s 1.794498211s 1.794861955s 1.796704462s 1.79978353s 1.800549003s 1.806003109s 1.806107367s 1.810563479s 1.81585666s 1.838141861s 1.839970584s 1.841121359s 1.841456864s 1.845416784s 1.859263326s 1.862014254s 1.863000761s 1.88413444s 1.889862802s 1.896757122s 1.910192478s 1.913085026s 1.913791407s 1.919187932s 1.921773817s 1.936028981s 1.940041015s 1.942741127s 1.942777518s 1.970150245s 1.9740858s 1.996414807s 2.029072297s 2.034355139s 2.039633555s 2.048136607s 2.093008151s]
Jan  5 14:15:15.767: INFO: 50 %ile: 1.621434961s
Jan  5 14:15:15.767: INFO: 90 %ile: 1.88413444s
Jan  5 14:15:15.767: INFO: 99 %ile: 2.048136607s
Jan  5 14:15:15.767: INFO: Total sample count: 200
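
The 50/90/99 %ile figures above are read straight off the ascending sort of the 200 per-service latency samples in the Latencies line. A minimal sketch of that computation in Go; the index convention used here (value at index p*N/100 of the sorted slice) is an assumption for illustration, not necessarily the suite's exact rule:

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile returns the p-th percentile of ds (0 < p <= 100) as the
// value at index p*len/100 of the ascending-sorted samples (assumed convention).
func percentile(ds []time.Duration, p int) time.Duration {
    sorted := append([]time.Duration(nil), ds...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    i := p * len(sorted) / 100
    if i >= len(sorted) {
        i = len(sorted) - 1
    }
    return sorted[i]
}

func main() {
    // Toy sample; the run above has 200 endpoint-creation latencies.
    samples := []time.Duration{
        69 * time.Millisecond, 250 * time.Millisecond,
        1500 * time.Millisecond, 1621 * time.Millisecond,
        1884 * time.Millisecond, 2048 * time.Millisecond,
    }
    for _, p := range []int{50, 90, 99} {
        fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
    }
}

With the real 200-sample list this picks out values like the reported 1.621434961s at the median and 2.048136607s at the 99th percentile.
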
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:15:15.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6393" for this suite.
Jan  5 14:15:57.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:15:57.990: INFO: namespace svc-latency-6393 deletion completed in 42.206487817s

• [SLOW TEST:72.550 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:15:57.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:15:58.076: INFO: Creating deployment "test-recreate-deployment"
Jan  5 14:15:58.095: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan  5 14:15:58.104: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  5 14:16:00.124: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan  5 14:16:00.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 14:16:02.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 14:16:04.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 14:16:06.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713830558, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 14:16:08.152: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  5 14:16:08.200: INFO: Updating deployment test-recreate-deployment
Jan  5 14:16:08.201: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  5 14:16:08.766: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2357,SelfLink:/apis/apps/v1/namespaces/deployment-2357/deployments/test-recreate-deployment,UID:9cebe207-1c65-4ff3-8d6b-9f24bc7a9642,ResourceVersion:19407951,Generation:2,CreationTimestamp:2020-01-05 14:15:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-05 14:16:08 +0000 UTC 2020-01-05 14:16:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-05 14:16:08 +0000 UTC 2020-01-05 14:15:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  5 14:16:08.776: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2357,SelfLink:/apis/apps/v1/namespaces/deployment-2357/replicasets/test-recreate-deployment-5c8c9cc69d,UID:d840b16b-dc0a-4192-8750-8efb5ad47976,ResourceVersion:19407947,Generation:1,CreationTimestamp:2020-01-05 14:16:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9cebe207-1c65-4ff3-8d6b-9f24bc7a9642 0xc00251ca67 0xc00251ca68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 14:16:08.776: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  5 14:16:08.776: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2357,SelfLink:/apis/apps/v1/namespaces/deployment-2357/replicasets/test-recreate-deployment-6df85df6b9,UID:91c1554c-8858-4482-b97a-d6d0e2daea68,ResourceVersion:19407939,Generation:2,CreationTimestamp:2020-01-05 14:15:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 9cebe207-1c65-4ff3-8d6b-9f24bc7a9642 0xc00251cb57 0xc00251cb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  5 14:16:08.783: INFO: Pod "test-recreate-deployment-5c8c9cc69d-wglj7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-wglj7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2357,SelfLink:/api/v1/namespaces/deployment-2357/pods/test-recreate-deployment-5c8c9cc69d-wglj7,UID:1129f2f6-dbc3-43d4-98cb-a65458d24f21,ResourceVersion:19407946,Generation:0,CreationTimestamp:2020-01-05 14:16:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d d840b16b-dc0a-4192-8750-8efb5ad47976 0xc002a81b17 0xc002a81b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w9q68 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w9q68,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-w9q68 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a81bf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a81c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:16:08 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:16:08.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2357" for this suite.
Jan  5 14:16:16.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:16:16.946: INFO: namespace deployment-2357 deletion completed in 8.154234678s

• [SLOW TEST:18.956 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
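
What the RecreateDeployment case exercises: with strategy type Recreate, the controller scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, so old and new pods never run side by side (visible above as the redis ReplicaSet at Replicas:*0 while the nginx one progresses). A hedged sketch of such a Deployment built with the Go API types; names and the printed output are illustrative, and the block assumes k8s.io/api and k8s.io/apimachinery on the module path:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "sample-pod-3"}
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Recreate: terminate all old pods before starting any new ones.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"},
                    },
                },
            },
        },
    }
    b, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(b))
}
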
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:16:16.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  5 14:16:25.261: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  5 14:16:40.445: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:16:40.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3926" for this suite.
Jan  5 14:16:46.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:16:46.634: INFO: namespace pods-3926 deletion completed in 6.162644913s

• [SLOW TEST:29.687 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
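
The Delete Grace Period case above deletes the pod with an explicit grace period and then polls (through the kubectl proxy it started) until the kubelet has observed the termination and the pod object is gone. A minimal sketch of expressing a graceful delete with the API machinery types; the client-go Delete signature varies by release, so the call shape in the comment is an assumption:

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // 30s grace: the kubelet gets up to 30 seconds to stop the containers
    // before the pod object is removed from the API server.
    grace := int64(30)
    opts := metav1.DeleteOptions{GracePeriodSeconds: &grace}

    // With a client-go clientset this would be passed to something like
    //   clientset.CoreV1().Pods(ns).Delete(ctx, name, opts)
    // (exact signature depends on the client-go version in use).
    b, _ := json.MarshalIndent(opts, "", "  ")
    fmt.Println(string(b))
}
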
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:16:46.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  5 14:16:46.717: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:16:46.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7341" for this suite.
Jan  5 14:16:52.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:16:53.008: INFO: namespace kubectl-7341 deletion completed in 6.177444303s

• [SLOW TEST:6.373 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
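
"--port 0" ("-p 0" in the log line above) asks the proxy to bind an ephemeral port chosen by the kernel, which is why the test has to parse the actual port from the proxy's startup output before curling /api/. The underlying mechanism is plain TCP port-0 binding, as in this small sketch:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Port 0 tells the kernel to pick any free ephemeral port.
    ln, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        panic(err)
    }
    defer ln.Close()
    // Recover the port that was actually assigned.
    fmt.Println("listening on port", ln.Addr().(*net.TCPAddr).Port)
}
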
SSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:16:53.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-554/secret-test-f91811df-f63c-40eb-8cea-722ddf90962f
STEP: Creating a pod to test consume secrets
Jan  5 14:16:53.144: INFO: Waiting up to 5m0s for pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525" in namespace "secrets-554" to be "success or failure"
Jan  5 14:16:53.149: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525": Phase="Pending", Reason="", readiness=false. Elapsed: 5.609158ms
Jan  5 14:16:55.164: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020430384s
Jan  5 14:16:57.169: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025449443s
Jan  5 14:16:59.178: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034155657s
Jan  5 14:17:01.186: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041887097s
STEP: Saw pod success
Jan  5 14:17:01.186: INFO: Pod "pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525" satisfied condition "success or failure"
Jan  5 14:17:01.190: INFO: Trying to get logs from node iruya-node pod pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525 container env-test: 
STEP: delete the pod
Jan  5 14:17:01.272: INFO: Waiting for pod pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525 to disappear
Jan  5 14:17:01.316: INFO: Pod pod-configmaps-47c07d1b-afc1-4dbe-bca1-42b787d39525 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:17:01.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-554" for this suite.
Jan  5 14:17:07.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:17:07.571: INFO: namespace secrets-554 deletion completed in 6.245635856s

• [SLOW TEST:14.563 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
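
"Consumable via the environment" means the secret's keys are injected as environment variables through valueFrom.secretKeyRef, and the env-test container simply prints its environment for the test to verify. A sketch with the Go API types; the image, command, secret name, and key are illustrative stand-ins, not the suite's actual values:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox", // illustrative image
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        // Pull the value of key "data-1" from Secret "secret-test".
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
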
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:17:07.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:17:07.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2" in namespace "downward-api-2452" to be "success or failure"
Jan  5 14:17:07.704: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780622ms
Jan  5 14:17:09.724: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023956954s
Jan  5 14:17:11.732: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031837655s
Jan  5 14:17:13.741: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040586095s
Jan  5 14:17:15.822: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122119847s
STEP: Saw pod success
Jan  5 14:17:15.823: INFO: Pod "downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2" satisfied condition "success or failure"
Jan  5 14:17:15.828: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2 container client-container: 
STEP: delete the pod
Jan  5 14:17:15.911: INFO: Waiting for pod downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2 to disappear
Jan  5 14:17:15.916: INFO: Pod downwardapi-volume-64f8131f-1119-417a-99bb-f19dd10551e2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:17:15.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2452" for this suite.
Jan  5 14:17:21.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:17:22.128: INFO: namespace downward-api-2452 deletion completed in 6.173272149s

• [SLOW TEST:14.557 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
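
"Should provide podname only" exercises the downward API volume plugin: the pod's own metadata.name is projected into a file, and the client-container cats it so the framework can check the log output. A sketch of the volume wiring; mount path, image, and command are illustrative assumptions:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // illustrative image
                Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "podinfo", MountPath: "/etc/podinfo"},
                },
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "podname",
                            // Expose the pod's own name as the file's content.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
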
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:17:22.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1ada4704-c234-4bcf-a5bf-810d07aa4bc5
STEP: Creating a pod to test consume configMaps
Jan  5 14:17:22.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139" in namespace "configmap-9658" to be "success or failure"
Jan  5 14:17:22.397: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139": Phase="Pending", Reason="", readiness=false. Elapsed: 68.277721ms
Jan  5 14:17:24.407: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078776055s
Jan  5 14:17:26.425: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096444051s
Jan  5 14:17:28.462: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133423909s
Jan  5 14:17:30.475: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146485768s
STEP: Saw pod success
Jan  5 14:17:30.475: INFO: Pod "pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139" satisfied condition "success or failure"
Jan  5 14:17:30.482: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139 container configmap-volume-test: 
STEP: delete the pod
Jan  5 14:17:30.571: INFO: Waiting for pod pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139 to disappear
Jan  5 14:17:30.589: INFO: Pod pod-configmaps-7cfb1605-1cb8-422c-840b-b9f6262a6139 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:17:30.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9658" for this suite.
Jan  5 14:17:36.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:17:36.732: INFO: namespace configmap-9658 deletion completed in 6.130271452s

• [SLOW TEST:14.602 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
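
The "as non-root" variant mounts the ConfigMap volume while the container runs under a non-zero UID, verifying the projected files are still readable without root. A sketch; the UID, image, command, and object names are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    uid := int64(1000) // any non-root UID (assumed for illustration)
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-nonroot"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox", // illustrative image
                Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
                },
            }},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
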
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:17:36.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e4253983-6728-432f-9598-ada590c947e4
STEP: Creating a pod to test consume configMaps
Jan  5 14:17:36.814: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d" in namespace "projected-2170" to be "success or failure"
Jan  5 14:17:36.822: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259469ms
Jan  5 14:17:38.863: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049543501s
Jan  5 14:17:40.876: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062040511s
Jan  5 14:17:42.902: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087680849s
Jan  5 14:17:44.911: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096653801s
STEP: Saw pod success
Jan  5 14:17:44.911: INFO: Pod "pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d" satisfied condition "success or failure"
Jan  5 14:17:44.914: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 14:17:44.981: INFO: Waiting for pod pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d to disappear
Jan  5 14:17:44.988: INFO: Pod pod-projected-configmaps-1c9e762f-a314-4876-abec-5f666480068d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:17:44.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2170" for this suite.
Jan  5 14:17:51.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:17:51.245: INFO: namespace projected-2170 deletion completed in 6.1985056s

• [SLOW TEST:14.513 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
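
Here a single ConfigMap backs two projected volumes mounted at different paths in the same pod, and the test reads the key back through both mounts. A sketch of that shape; volume names, paths, and the image are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedCM builds a projected volume sourcing one ConfigMap.
func projectedCM(volName, cmName string) corev1.Volume {
    return corev1.Volume{
        Name: volName,
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    },
                }},
            },
        },
    }
}

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox", // illustrative image
                Command: []string{"sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "cm-a", MountPath: "/etc/cm-a"},
                    {Name: "cm-b", MountPath: "/etc/cm-b"},
                },
            }},
            // Two volumes, same ConfigMap, mounted at two paths.
            Volumes: []corev1.Volume{
                projectedCM("cm-a", "projected-configmap-test"),
                projectedCM("cm-b", "projected-configmap-test"),
            },
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
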
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:17:51.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  5 14:17:51.383: INFO: Waiting up to 5m0s for pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4" in namespace "emptydir-7303" to be "success or failure"
Jan  5 14:17:51.489: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 106.225579ms
Jan  5 14:17:53.503: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119914318s
Jan  5 14:17:55.517: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134406762s
Jan  5 14:17:57.530: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147552125s
Jan  5 14:17:59.537: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.154575307s
Jan  5 14:18:01.547: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164209598s
STEP: Saw pod success
Jan  5 14:18:01.547: INFO: Pod "pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4" satisfied condition "success or failure"
Jan  5 14:18:01.553: INFO: Trying to get logs from node iruya-node pod pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4 container test-container: 
STEP: delete the pod
Jan  5 14:18:01.600: INFO: Waiting for pod pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4 to disappear
Jan  5 14:18:01.604: INFO: Pod pod-cbda10d6-b3c8-4465-9944-7a05ed704ab4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:18:01.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7303" for this suite.
Jan  5 14:18:07.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:18:07.799: INFO: namespace emptydir-7303 deletion completed in 6.18900027s

• [SLOW TEST:16.553 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
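
"(root,0777,tmpfs)" encodes the point in the test matrix being exercised: the container runs as root, the emptyDir content is created with mode 0777, and the volume medium is memory, i.e. a tmpfs mount. A sketch of the memory-backed emptyDir; image and commands are illustrative stand-ins for the e2e test image:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative image
                // Report the mount's permissions and filesystem type.
                Command: []string{"sh", "-c",
                    "stat -c %a /test-volume && grep /test-volume /proc/mounts"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "test-volume", MountPath: "/test-volume"},
                },
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" makes the emptyDir a tmpfs mount.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
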
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:18:07.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-442e9d92-804f-41c2-b2f3-16e00ac58b8a
STEP: Creating a pod to test consume configMaps
Jan  5 14:18:07.957: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84" in namespace "projected-5509" to be "success or failure"
Jan  5 14:18:07.966: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Pending", Reason="", readiness=false. Elapsed: 9.548156ms
Jan  5 14:18:09.977: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020666495s
Jan  5 14:18:11.990: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032968877s
Jan  5 14:18:14.001: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044213263s
Jan  5 14:18:16.009: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052618495s
Jan  5 14:18:18.021: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064504037s
STEP: Saw pod success
Jan  5 14:18:18.021: INFO: Pod "pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84" satisfied condition "success or failure"
Jan  5 14:18:18.026: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 14:18:18.553: INFO: Waiting for pod pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84 to disappear
Jan  5 14:18:18.574: INFO: Pod pod-projected-configmaps-0b1e4382-e7d7-4ce8-9765-870ea7b01e84 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:18:18.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5509" for this suite.
Jan  5 14:18:24.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:18:24.819: INFO: namespace projected-5509 deletion completed in 6.233021227s

• [SLOW TEST:17.020 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
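
"With mappings" means individual ConfigMap keys are remapped to custom file paths through items/keyToPath entries, rather than landing in files named after the keys. A sketch of just the mapping; key and path values are illustrative:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-volume-map",
                        },
                        // Remap key "data-1" to the file "path/to/data-2"
                        // under the volume's mount point.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}
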
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:18:24.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 14:18:25.040: INFO: Number of nodes with available pods: 0
Jan  5 14:18:25.040: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:26.059: INFO: Number of nodes with available pods: 0
Jan  5 14:18:26.059: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:27.059: INFO: Number of nodes with available pods: 0
Jan  5 14:18:27.059: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:28.070: INFO: Number of nodes with available pods: 0
Jan  5 14:18:28.070: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:29.081: INFO: Number of nodes with available pods: 0
Jan  5 14:18:29.082: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:30.945: INFO: Number of nodes with available pods: 0
Jan  5 14:18:30.945: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:31.794: INFO: Number of nodes with available pods: 0
Jan  5 14:18:31.795: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:32.704: INFO: Number of nodes with available pods: 0
Jan  5 14:18:32.704: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:33.056: INFO: Number of nodes with available pods: 0
Jan  5 14:18:33.056: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:34.062: INFO: Number of nodes with available pods: 0
Jan  5 14:18:34.063: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:18:35.104: INFO: Number of nodes with available pods: 2
Jan  5 14:18:35.105: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  5 14:18:35.176: INFO: Number of nodes with available pods: 1
Jan  5 14:18:35.176: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:36.195: INFO: Number of nodes with available pods: 1
Jan  5 14:18:36.195: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:37.198: INFO: Number of nodes with available pods: 1
Jan  5 14:18:37.198: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:38.201: INFO: Number of nodes with available pods: 1
Jan  5 14:18:38.201: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:39.420: INFO: Number of nodes with available pods: 1
Jan  5 14:18:39.421: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:40.205: INFO: Number of nodes with available pods: 1
Jan  5 14:18:40.205: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:41.208: INFO: Number of nodes with available pods: 1
Jan  5 14:18:41.208: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:42.210: INFO: Number of nodes with available pods: 1
Jan  5 14:18:42.211: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:43.195: INFO: Number of nodes with available pods: 1
Jan  5 14:18:43.195: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:44.197: INFO: Number of nodes with available pods: 1
Jan  5 14:18:44.197: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:45.195: INFO: Number of nodes with available pods: 1
Jan  5 14:18:45.195: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:46.198: INFO: Number of nodes with available pods: 1
Jan  5 14:18:46.198: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:47.221: INFO: Number of nodes with available pods: 1
Jan  5 14:18:47.221: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:48.197: INFO: Number of nodes with available pods: 1
Jan  5 14:18:48.198: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:49.206: INFO: Number of nodes with available pods: 1
Jan  5 14:18:49.207: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:50.202: INFO: Number of nodes with available pods: 1
Jan  5 14:18:50.202: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:51.189: INFO: Number of nodes with available pods: 1
Jan  5 14:18:51.190: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:52.657: INFO: Number of nodes with available pods: 1
Jan  5 14:18:52.657: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:53.188: INFO: Number of nodes with available pods: 1
Jan  5 14:18:53.189: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:54.192: INFO: Number of nodes with available pods: 1
Jan  5 14:18:54.192: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:18:55.197: INFO: Number of nodes with available pods: 2
Jan  5 14:18:55.198: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4850, will wait for the garbage collector to delete the pods
Jan  5 14:18:55.273: INFO: Deleting DaemonSet.extensions daemon-set took: 15.36664ms
Jan  5 14:18:55.673: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.71878ms
Jan  5 14:19:06.584: INFO: Number of nodes with available pods: 0
Jan  5 14:19:06.584: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 14:19:06.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4850/daemonsets","resourceVersion":"19408449"},"items":null}

Jan  5 14:19:06.592: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4850/pods","resourceVersion":"19408449"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:19:06.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4850" for this suite.
Jan  5 14:19:12.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:19:12.794: INFO: namespace daemonsets-4850 deletion completed in 6.180871037s

• [SLOW TEST:47.971 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
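
The "run and stop simple daemon" flow can be reproduced by hand: create a DaemonSet, wait for one pod per node, delete a pod and watch the controller revive it. A sketch, assuming an nginx image and illustrative labels:

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels: {daemonset-name: daemon-set}
      template:
        metadata:
          labels: {daemonset-name: daemon-set}
        spec:
          containers:
          - name: app
            image: nginx:1.14-alpine
    EOF
    kubectl rollout status ds/daemon-set      # one pod per schedulable node
    # "stop" one daemon pod and watch the controller recreate it
    kubectl delete pod -l daemonset-name=daemon-set --field-selector spec.nodeName=iruya-node
    kubectl get pods -l daemonset-name=daemon-set -w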
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:19:12.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 14:19:12.981: INFO: Number of nodes with available pods: 0
Jan  5 14:19:12.981: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:15.012: INFO: Number of nodes with available pods: 0
Jan  5 14:19:15.012: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:16.256: INFO: Number of nodes with available pods: 0
Jan  5 14:19:16.256: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:17.003: INFO: Number of nodes with available pods: 0
Jan  5 14:19:17.004: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:18.000: INFO: Number of nodes with available pods: 0
Jan  5 14:19:18.001: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:19.224: INFO: Number of nodes with available pods: 0
Jan  5 14:19:19.224: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:20.543: INFO: Number of nodes with available pods: 0
Jan  5 14:19:20.544: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:20.999: INFO: Number of nodes with available pods: 0
Jan  5 14:19:20.999: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:22.019: INFO: Number of nodes with available pods: 0
Jan  5 14:19:22.019: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:22.997: INFO: Number of nodes with available pods: 1
Jan  5 14:19:22.998: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:23.996: INFO: Number of nodes with available pods: 1
Jan  5 14:19:23.996: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:19:24.995: INFO: Number of nodes with available pods: 2
Jan  5 14:19:24.995: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  5 14:19:25.103: INFO: Number of nodes with available pods: 1
Jan  5 14:19:25.103: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:26.184: INFO: Number of nodes with available pods: 1
Jan  5 14:19:26.184: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:27.117: INFO: Number of nodes with available pods: 1
Jan  5 14:19:27.117: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:28.361: INFO: Number of nodes with available pods: 1
Jan  5 14:19:28.361: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:29.116: INFO: Number of nodes with available pods: 1
Jan  5 14:19:29.116: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:30.671: INFO: Number of nodes with available pods: 1
Jan  5 14:19:30.671: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:31.119: INFO: Number of nodes with available pods: 1
Jan  5 14:19:31.119: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:32.163: INFO: Number of nodes with available pods: 1
Jan  5 14:19:32.164: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  5 14:19:33.120: INFO: Number of nodes with available pods: 2
Jan  5 14:19:33.120: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1987, will wait for the garbage collector to delete the pods
Jan  5 14:19:33.197: INFO: Deleting DaemonSet.extensions daemon-set took: 13.23211ms
Jan  5 14:19:33.497: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.573898ms
Jan  5 14:19:40.229: INFO: Number of nodes with available pods: 0
Jan  5 14:19:40.229: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 14:19:40.241: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1987/daemonsets","resourceVersion":"19408572"},"items":null}

Jan  5 14:19:40.250: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1987/pods","resourceVersion":"19408572"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:19:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1987" for this suite.
Jan  5 14:19:46.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:19:46.551: INFO: namespace daemonsets-1987 deletion completed in 6.261815303s

• [SLOW TEST:33.757 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
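
This test does not kill containers; it updates a daemon pod's status to phase Failed through the API and asserts the controller deletes and replaces it. There is no kubectl verb for setting a pod phase, so the closest hand-driven sketch goes through kubectl proxy ($POD is a placeholder for the chosen daemon pod's name; the namespace matches the run above):

    kubectl proxy --port=8001 &
    # roughly what the test does via client-go: force the pod's phase to Failed
    curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
      "http://127.0.0.1:8001/api/v1/namespaces/daemonsets-1987/pods/$POD/status" \
      -d '{"status":{"phase":"Failed"}}'
    kubectl get pods -n daemonsets-1987 -w    # the failed pod is removed and a replacement created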
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:19:46.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:19:46.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8341'
Jan  5 14:19:47.105: INFO: stderr: ""
Jan  5 14:19:47.106: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  5 14:19:47.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8341'
Jan  5 14:19:47.769: INFO: stderr: ""
Jan  5 14:19:47.769: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  5 14:19:48.781: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:48.781: INFO: Found 0 / 1
Jan  5 14:19:49.781: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:49.781: INFO: Found 0 / 1
Jan  5 14:19:50.797: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:50.798: INFO: Found 0 / 1
Jan  5 14:19:51.789: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:51.790: INFO: Found 0 / 1
Jan  5 14:19:52.781: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:52.782: INFO: Found 0 / 1
Jan  5 14:19:53.793: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:53.794: INFO: Found 0 / 1
Jan  5 14:19:54.781: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:54.781: INFO: Found 0 / 1
Jan  5 14:19:55.779: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:55.779: INFO: Found 1 / 1
Jan  5 14:19:55.779: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  5 14:19:55.784: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:19:55.784: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  5 14:19:55.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xz275 --namespace=kubectl-8341'
Jan  5 14:19:56.029: INFO: stderr: ""
Jan  5 14:19:56.030: INFO: stdout: "Name:           redis-master-xz275\nNamespace:      kubectl-8341\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 05 Jan 2020 14:19:47 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://8f25ea7f32db1359f81c2c15d50fcb64999322343779f0ac58671077dc887f13\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 05 Jan 2020 14:19:54 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mj5l2 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-mj5l2:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-mj5l2\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-8341/redis-master-xz275 to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Jan  5 14:19:56.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8341'
Jan  5 14:19:56.222: INFO: stderr: ""
Jan  5 14:19:56.223: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8341\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-xz275\n"
Jan  5 14:19:56.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8341'
Jan  5 14:19:56.370: INFO: stderr: ""
Jan  5 14:19:56.370: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8341\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.161.174\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan  5 14:19:56.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  5 14:19:56.485: INFO: stderr: ""
Jan  5 14:19:56.486: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 05 Jan 2020 14:19:44 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 05 Jan 2020 14:19:44 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 05 Jan 2020 14:19:44 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 05 Jan 2020 14:19:44 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         154d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         85d\n  kubectl-8341               redis-master-xz275    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan  5 14:19:56.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8341'
Jan  5 14:19:56.604: INFO: stderr: ""
Jan  5 14:19:56.605: INFO: stdout: "Name:         kubectl-8341\nLabels:       e2e-framework=kubectl\n              e2e-run=17eced7b-853b-42a2-9b89-c0ac36014a8f\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:19:56.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8341" for this suite.
Jan  5 14:20:20.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:20:20.749: INFO: namespace kubectl-8341 deletion completed in 24.134701495s

• [SLOW TEST:34.195 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
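
The five describe calls the test issues, taken from the run above and stripped of the --kubeconfig flag, are exactly what produced the stdout dumps:

    kubectl describe pod redis-master-xz275 --namespace=kubectl-8341
    kubectl describe rc redis-master --namespace=kubectl-8341
    kubectl describe service redis-master --namespace=kubectl-8341
    kubectl describe node iruya-node
    kubectl describe namespace kubectl-8341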
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:20:20.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-250031a5-9048-4f69-bca5-b9826e4f466d
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-250031a5-9048-4f69-bca5-b9826e4f466d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:20:33.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3008" for this suite.
Jan  5 14:20:55.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:20:55.294: INFO: namespace configmap-3008 deletion completed in 22.146569647s

• [SLOW TEST:34.545 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
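
ConfigMap volumes are refreshed in place by the kubelet after its sync period, which is what the "waiting to observe update in volume" step polls for. A manual reproduction sketch (names, key, and image are illustrative):

    kubectl create configmap test-upd --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: cm-watch}
    spec:
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
        volumeMounts: [{name: cm, mountPath: /etc/cm}]
      volumes:
      - name: cm
        configMap: {name: test-upd}
    EOF
    # update the ConfigMap in place (v1.15-era idiom)
    kubectl create configmap test-upd --from-literal=data-1=value-2 \
      --dry-run -o yaml | kubectl replace -f -
    kubectl logs cm-watch -f        # the mounted file eventually shows value-2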
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:20:55.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:21:07.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4409" for this suite.
Jan  5 14:21:13.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:21:13.631: INFO: namespace kubelet-test-4409 deletion completed in 6.170437038s

• [SLOW TEST:18.336 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
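
The kubelet test runs a command that always fails and asserts the container status carries a terminated state with a reason. A simplified sketch of the same check (image and command assumed; restartPolicy Never keeps the terminated state visible):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: bin-false}
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["/bin/false"]
    EOF
    kubectl get pod bin-false \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # prints: Error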
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:21:13.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395
Jan  5 14:21:13.878: INFO: Pod name my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395: Found 1 pods out of 1
Jan  5 14:21:13.878: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395" are running
Jan  5 14:21:21.935: INFO: Pod "my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395-767vb" is running (conditions: [{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-05 14:21:13 +0000 UTC Reason: Message:}])
Jan  5 14:21:21.935: INFO: Trying to dial the pod
Jan  5 14:21:26.971: INFO: Controller my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395: Got expected result from replica 1 [my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395-767vb]: "my-hostname-basic-805d4e57-4958-4f0d-8033-2a51d100e395-767vb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:21:26.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3827" for this suite.
Jan  5 14:21:33.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:21:33.121: INFO: namespace replication-controller-3827 deletion completed in 6.142780184s

• [SLOW TEST:19.489 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
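
The ReplicationController test runs a public serve-hostname image at one replica and dials each pod expecting its own name back (the "Got expected result from replica 1" line above). A minimal manifest sketch, assuming the serve-hostname image from the e2e registry used elsewhere in this run (exact tag assumed):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: my-hostname-basic
    spec:
      replicas: 1
      selector: {name: my-hostname-basic}
      template:
        metadata:
          labels: {name: my-hostname-basic}
        spec:
          containers:
          - name: my-hostname-basic
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
            ports: [{containerPort: 9376}]
    EOF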
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:21:33.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d5042bd4-6f20-418e-aa42-12024062cabe
STEP: Creating a pod to test consume secrets
Jan  5 14:21:33.201: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5" in namespace "projected-6477" to be "success or failure"
Jan  5 14:21:33.212: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.93859ms
Jan  5 14:21:35.222: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020439851s
Jan  5 14:21:37.230: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028871921s
Jan  5 14:21:39.241: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039698158s
Jan  5 14:21:41.251: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049445991s
Jan  5 14:21:43.263: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061729269s
STEP: Saw pod success
Jan  5 14:21:43.263: INFO: Pod "pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5" satisfied condition "success or failure"
Jan  5 14:21:43.268: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5 container projected-secret-volume-test: 
STEP: delete the pod
Jan  5 14:21:43.903: INFO: Waiting for pod pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5 to disappear
Jan  5 14:21:43.915: INFO: Pod pod-projected-secrets-90fd1cd2-a256-4a29-81cb-324983bf21b5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:21:43.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6477" for this suite.
Jan  5 14:21:49.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:21:50.103: INFO: namespace projected-6477 deletion completed in 6.181782973s

• [SLOW TEST:16.982 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
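
The non-root/defaultMode/fsGroup combination means: the secret is projected with a restrictive file mode, the pod runs as a non-root UID, and fsGroup makes the files group-readable by that pod. A sketch with assumed mode and IDs (the test's exact values are not in the log):

    kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata: {name: pod-projected-secrets}
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000          # non-root
        fsGroup: 1001            # group ownership of the projected files
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
        volumeMounts: [{name: s, mountPath: /etc/projected-secret-volume, readOnly: true}]
      volumes:
      - name: s
        projected:
          defaultMode: 0440      # assumed; owner+group read only
          sources:
          - secret: {name: projected-secret-test}
    EOF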
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:21:50.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:21:50.227: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  5 14:21:55.238: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  5 14:21:59.251: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  5 14:21:59.325: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-1764,SelfLink:/apis/apps/v1/namespaces/deployment-1764/deployments/test-cleanup-deployment,UID:94b3aa1e-d2e6-458d-9481-97a408027ed8,ResourceVersion:19408945,Generation:1,CreationTimestamp:2020-01-05 14:21:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  5 14:21:59.344: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  5 14:21:59.344: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  5 14:21:59.346: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-1764,SelfLink:/apis/apps/v1/namespaces/deployment-1764/replicasets/test-cleanup-controller,UID:17d3fec5-a5e2-4287-92f0-3a5d088ae351,ResourceVersion:19408946,Generation:1,CreationTimestamp:2020-01-05 14:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 94b3aa1e-d2e6-458d-9481-97a408027ed8 0xc002614f2f 0xc002614f40}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  5 14:21:59.375: INFO: Pod "test-cleanup-controller-8nntj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-8nntj,GenerateName:test-cleanup-controller-,Namespace:deployment-1764,SelfLink:/api/v1/namespaces/deployment-1764/pods/test-cleanup-controller-8nntj,UID:4e850c60-8c8f-4a86-9f01-7270b56d17dc,ResourceVersion:19408941,Generation:0,CreationTimestamp:2020-01-05 14:21:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 17d3fec5-a5e2-4287-92f0-3a5d088ae351 0xc002615af7 0xc002615af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tv69s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tv69s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tv69s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002615bd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002615c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:21:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:21:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:21:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-05 14:21:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-05 14:21:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-05 14:21:56 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e615d8169998de9469b3bae9c78f381d76a7e6dcd57d231447f321041d6109fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:21:59.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1764" for this suite.
Jan  5 14:22:05.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:22:05.680: INFO: namespace deployment-1764 deletion completed in 6.244998233s

• [SLOW TEST:15.577 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
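
The cleanup assertion turns on revisionHistoryLimit, which the dumped spec above shows set to 0 (RevisionHistoryLimit:*0): the test first runs a bare "cleanup-pod" ReplicaSet (test-cleanup-controller), then creates a Deployment whose selector adopts it and waits for the superseded ReplicaSet to be garbage-collected. A sketch of the Deployment half, with image and labels taken from the dump:

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-cleanup-deployment
    spec:
      revisionHistoryLimit: 0     # old ReplicaSets are deleted as soon as they are scaled down
      replicas: 1
      selector:
        matchLabels: {name: cleanup-pod}
      template:
        metadata:
          labels: {name: cleanup-pod}
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
    EOF
    kubectl get rs -l name=cleanup-pod   # eventually only the Deployment's current ReplicaSet remains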
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:22:05.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  5 14:22:05.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3396'
Jan  5 14:22:05.982: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  5 14:22:05.982: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  5 14:22:08.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3396'
Jan  5 14:22:08.420: INFO: stderr: ""
Jan  5 14:22:08.420: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:22:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3396" for this suite.
Jan  5 14:22:14.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:22:14.701: INFO: namespace kubectl-3396 deletion completed in 6.181805335s

• [SLOW TEST:9.019 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
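
The deprecation warning above is expected on this cluster version: with no --generator or --restart flag, kubectl run on v1.15 falls back to the deployment/apps.v1 generator. The command the test ran, plus the replacement the warning points to:

    kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3396
    # non-deprecated equivalent:
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3396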
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:22:14.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  5 14:22:23.471: INFO: Successfully updated pod "labelsupdate768d0714-8e1f-4019-8c5f-4eb379bef95a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:22:25.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7639" for this suite.
Jan  5 14:22:47.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:22:47.662: INFO: namespace downward-api-7639 deletion completed in 22.124151075s

• [SLOW TEST:32.960 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
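
Labels exposed through a downwardAPI volume are re-projected by the kubelet when the pod's labels change, which is what the "Successfully updated pod" step verifies. A sketch (label key/values and image are illustrative):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labelsupdate
      labels: {stage: one}
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
        volumeMounts: [{name: podinfo, mountPath: /etc/podinfo}]
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef: {fieldPath: metadata.labels}
    EOF
    kubectl label pod labelsupdate stage=two --overwrite
    kubectl logs labelsupdate -f    # the projected file eventually shows stage="two"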
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:22:47.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:22:47.854: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  5 14:22:47.873: INFO: Number of nodes with available pods: 0
Jan  5 14:22:47.873: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  5 14:22:47.989: INFO: Number of nodes with available pods: 0
Jan  5 14:22:47.989: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:49.003: INFO: Number of nodes with available pods: 0
Jan  5 14:22:49.003: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:50.004: INFO: Number of nodes with available pods: 0
Jan  5 14:22:50.004: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:51.001: INFO: Number of nodes with available pods: 0
Jan  5 14:22:51.001: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:51.999: INFO: Number of nodes with available pods: 0
Jan  5 14:22:51.999: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:52.996: INFO: Number of nodes with available pods: 0
Jan  5 14:22:52.996: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:54.003: INFO: Number of nodes with available pods: 0
Jan  5 14:22:54.003: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:54.998: INFO: Number of nodes with available pods: 0
Jan  5 14:22:54.998: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:56.003: INFO: Number of nodes with available pods: 1
Jan  5 14:22:56.003: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  5 14:22:56.072: INFO: Number of nodes with available pods: 1
Jan  5 14:22:56.072: INFO: Number of running nodes: 0, number of available pods: 1
Jan  5 14:22:57.082: INFO: Number of nodes with available pods: 0
Jan  5 14:22:57.082: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  5 14:22:57.157: INFO: Number of nodes with available pods: 0
Jan  5 14:22:57.158: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:58.166: INFO: Number of nodes with available pods: 0
Jan  5 14:22:58.166: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:22:59.174: INFO: Number of nodes with available pods: 0
Jan  5 14:22:59.174: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:00.167: INFO: Number of nodes with available pods: 0
Jan  5 14:23:00.167: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:01.166: INFO: Number of nodes with available pods: 0
Jan  5 14:23:01.166: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:02.180: INFO: Number of nodes with available pods: 0
Jan  5 14:23:02.180: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:03.172: INFO: Number of nodes with available pods: 0
Jan  5 14:23:03.172: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:04.171: INFO: Number of nodes with available pods: 0
Jan  5 14:23:04.172: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:05.166: INFO: Number of nodes with available pods: 0
Jan  5 14:23:05.166: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:06.171: INFO: Number of nodes with available pods: 0
Jan  5 14:23:06.171: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:07.168: INFO: Number of nodes with available pods: 0
Jan  5 14:23:07.168: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:08.169: INFO: Number of nodes with available pods: 0
Jan  5 14:23:08.169: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:09.167: INFO: Number of nodes with available pods: 0
Jan  5 14:23:09.167: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:23:10.231: INFO: Number of nodes with available pods: 1
Jan  5 14:23:10.231: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4850, will wait for the garbage collector to delete the pods
Jan  5 14:23:10.314: INFO: Deleting DaemonSet.extensions daemon-set took: 13.17773ms
Jan  5 14:23:10.614: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.51039ms
Jan  5 14:23:26.621: INFO: Number of nodes with available pods: 0
Jan  5 14:23:26.621: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 14:23:26.624: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4850/daemonsets","resourceVersion":"19409202"},"items":null}

Jan  5 14:23:26.627: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4850/pods","resourceVersion":"19409202"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:23:26.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4850" for this suite.
Jan  5 14:23:32.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:23:32.888: INFO: namespace daemonsets-4850 deletion completed in 6.142717571s

• [SLOW TEST:45.226 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
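
Note: the run above exercises DaemonSet node-selector scheduling plus a mid-run switch to the RollingUpdate strategy: relabeling the node away from the selector evicts the daemon pod, and relabeling it back (or updating the selector) reschedules it. A minimal sketch of such an object follows, assuming illustrative label keys and image; the suite's actual manifest is built in daemon_set.go and may differ.

// Sketch only: names, labels, and image are assumptions, not the suite's exact values.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// RollingUpdate is the strategy the test switches to mid-run.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying this label receive a daemon pod;
					// "color: green" mirrors the relabeling step in the log.
					NodeSelector: map[string]string{"color": "green"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1",
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(ds)
	fmt.Print(string(out))
}
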
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:23:32.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  5 14:23:32.958: INFO: Waiting up to 5m0s for pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5" in namespace "var-expansion-3905" to be "success or failure"
Jan  5 14:23:32.963: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711117ms
Jan  5 14:23:34.970: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011997987s
Jan  5 14:23:36.977: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0191729s
Jan  5 14:23:38.994: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036092625s
Jan  5 14:23:41.002: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043353211s
STEP: Saw pod success
Jan  5 14:23:41.002: INFO: Pod "var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5" satisfied condition "success or failure"
Jan  5 14:23:41.005: INFO: Trying to get logs from node iruya-node pod var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5 container dapi-container: 
STEP: delete the pod
Jan  5 14:23:41.047: INFO: Waiting for pod var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5 to disappear
Jan  5 14:23:41.117: INFO: Pod var-expansion-f13eb614-dc23-41ab-a680-a62fee0510c5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:23:41.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3905" for this suite.
Jan  5 14:23:47.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:23:47.287: INFO: namespace var-expansion-3905 deletion completed in 6.162730642s

• [SLOW TEST:14.398 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
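
Note: the env-composition pod above relies on Kubernetes' $(VAR) dependent-variable expansion for container env vars, which the kubelet resolves at container start (not the shell). A minimal sketch under assumed names — FOO/FOOBAR and the busybox image are illustrative, not the suite's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $FOOBAR"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					// $(FOO) is expanded by Kubernetes before the container
					// starts, composing one env var from another.
					{Name: "FOOBAR", Value: "$(FOO);;$(FOO)"},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
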
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:23:47.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-52tdc in namespace proxy-4080
I0105 14:23:47.491261       8 runners.go:180] Created replication controller with name: proxy-service-52tdc, namespace: proxy-4080, replica count: 1
I0105 14:23:48.542762       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:49.543641       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:50.544517       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:51.545852       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:52.547485       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:53.548301       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:54.549097       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0105 14:23:55.550057       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 14:23:56.551286       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 14:23:57.551822       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0105 14:23:58.552779       8 runners.go:180] proxy-service-52tdc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  5 14:23:58.566: INFO: setup took 11.199700686s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  5 14:23:58.608: INFO: (0) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 41.010724ms)
Jan  5 14:23:58.637: INFO: (0) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 69.170195ms)
Jan  5 14:23:58.641: INFO: (0) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 73.441798ms)
Jan  5 14:23:58.642: INFO: (0) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 73.536881ms)
Jan  5 14:23:58.643: INFO: (0) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 73.993789ms)
Jan  5 14:23:58.643: INFO: (0) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 74.137217ms)
Jan  5 14:23:58.643: INFO: (0) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 74.995112ms)
Jan  5 14:23:58.645: INFO: (0) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 76.548886ms)
Jan  5 14:23:58.645: INFO: (0) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 77.146178ms)
Jan  5 14:23:58.645: INFO: (0) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 77.437866ms)
Jan  5 14:23:58.646: INFO: (0) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 77.121745ms)
Jan  5 14:23:58.682: INFO: (0) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 19.416721ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 19.337264ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 19.615562ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 20.117366ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 19.594663ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 19.729678ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 20.242953ms)
Jan  5 14:23:58.703: INFO: (1) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 19.459846ms)
Jan  5 14:23:58.708: INFO: (1) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 22.840332ms)
Jan  5 14:23:58.708: INFO: (1) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 24.605409ms)
Jan  5 14:23:58.709: INFO: (1) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 26.23171ms)
Jan  5 14:23:58.710: INFO: (1) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 26.802257ms)
Jan  5 14:23:58.711: INFO: (1) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 27.565508ms)
Jan  5 14:23:58.721: INFO: (2) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 10.190473ms)
Jan  5 14:23:58.722: INFO: (2) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 10.90436ms)
Jan  5 14:23:58.722: INFO: (2) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 10.945951ms)
Jan  5 14:23:58.722: INFO: (2) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 10.420579ms)
Jan  5 14:23:58.723: INFO: (2) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 12.050233ms)
Jan  5 14:23:58.723: INFO: (2) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 16.425829ms)
Jan  5 14:23:58.729: INFO: (2) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 17.823412ms)
Jan  5 14:23:58.729: INFO: (2) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 18.697594ms)
Jan  5 14:23:58.731: INFO: (2) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 19.360183ms)
Jan  5 14:23:58.731: INFO: (2) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 19.546589ms)
Jan  5 14:23:58.731: INFO: (2) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 19.741626ms)
Jan  5 14:23:58.736: INFO: (2) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 24.708655ms)
Jan  5 14:23:58.748: INFO: (3) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 16.90997ms)
Jan  5 14:23:58.755: INFO: (3) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 17.677427ms)
Jan  5 14:23:58.755: INFO: (3) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 18.079271ms)
Jan  5 14:23:58.756: INFO: (3) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 18.675602ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 12.578303ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 12.475166ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 12.55101ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 12.611759ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 12.879735ms)
Jan  5 14:23:58.769: INFO: (4) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 13.732766ms)
Jan  5 14:23:58.770: INFO: (4) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 13.844959ms)
Jan  5 14:23:58.771: INFO: (4) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 14.696486ms)
Jan  5 14:23:58.771: INFO: (4) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 14.494823ms)
Jan  5 14:23:58.771: INFO: (4) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 14.579769ms)
Jan  5 14:23:58.771: INFO: (4) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 14.992231ms)
Jan  5 14:23:58.771: INFO: (4) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 14.586412ms)
Jan  5 14:23:58.772: INFO: (4) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 15.170655ms)
Jan  5 14:23:58.772: INFO: (4) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 15.514829ms)
Jan  5 14:23:58.772: INFO: (4) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 15.531656ms)
Jan  5 14:23:58.787: INFO: (5) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 14.793059ms)
Jan  5 14:23:58.787: INFO: (5) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 14.758301ms)
Jan  5 14:23:58.788: INFO: (5) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 15.245444ms)
Jan  5 14:23:58.789: INFO: (5) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 16.019978ms)
Jan  5 14:23:58.789: INFO: (5) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 16.266675ms)
Jan  5 14:23:58.789: INFO: (5) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 17.341795ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 17.381632ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 17.174989ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 17.257208ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 17.206681ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 17.35516ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 17.55627ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 17.354574ms)
Jan  5 14:23:58.790: INFO: (5) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 17.64586ms)
Jan  5 14:23:58.799: INFO: (6) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 8.43888ms)
Jan  5 14:23:58.799: INFO: (6) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 8.855865ms)
Jan  5 14:23:58.801: INFO: (6) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 9.928734ms)
Jan  5 14:23:58.807: INFO: (6) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 16.052744ms)
Jan  5 14:23:58.807: INFO: (6) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 15.710458ms)
Jan  5 14:23:58.809: INFO: (6) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 17.08599ms)
Jan  5 14:23:58.810: INFO: (6) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 17.734679ms)
Jan  5 14:23:58.810: INFO: (6) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 18.714267ms)
Jan  5 14:23:58.810: INFO: (6) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 19.010744ms)
Jan  5 14:23:58.810: INFO: (6) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 20.105594ms)
Jan  5 14:23:58.838: INFO: (7) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 20.210532ms)
Jan  5 14:23:58.838: INFO: (7) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 20.481958ms)
Jan  5 14:23:58.839: INFO: (7) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 21.525821ms)
Jan  5 14:23:58.839: INFO: (7) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 21.979597ms)
Jan  5 14:23:58.840: INFO: (7) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 22.094044ms)
Jan  5 14:23:58.840: INFO: (7) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 21.895156ms)
Jan  5 14:23:58.840: INFO: (7) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 22.436723ms)
Jan  5 14:23:58.840: INFO: (7) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 22.376064ms)
Jan  5 14:23:58.845: INFO: (7) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 27.302533ms)
Jan  5 14:23:58.846: INFO: (7) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 28.427658ms)
Jan  5 14:23:58.846: INFO: (7) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 28.610163ms)
Jan  5 14:23:58.846: INFO: (7) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 28.732507ms)
Jan  5 14:23:58.848: INFO: (7) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 29.797842ms)
Jan  5 14:23:58.853: INFO: (7) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 35.045442ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 35.773147ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 36.088548ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 35.886256ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 37.219129ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 36.317893ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 37.304104ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 36.369241ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 36.428657ms)
Jan  5 14:23:58.890: INFO: (8) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 36.690893ms)
Jan  5 14:23:58.891: INFO: (8) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 36.818934ms)
Jan  5 14:23:58.891: INFO: (8) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 37.023508ms)
Jan  5 14:23:58.891: INFO: (8) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 21.014353ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 21.809071ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 21.446516ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 21.935742ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 21.557977ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 24.040757ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 22.298205ms)
Jan  5 14:23:58.915: INFO: (9) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 22.068858ms)
Jan  5 14:23:58.916: INFO: (9) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 22.068242ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 27.263909ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 27.542629ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 27.386888ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 27.97993ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 27.943191ms)
Jan  5 14:23:58.921: INFO: (9) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 27.801551ms)
Jan  5 14:23:58.931: INFO: (10) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 9.738118ms)
Jan  5 14:23:58.931: INFO: (10) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 9.793796ms)
Jan  5 14:23:58.932: INFO: (10) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 10.595685ms)
Jan  5 14:23:58.932: INFO: (10) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 10.559392ms)
Jan  5 14:23:58.932: INFO: (10) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 10.56853ms)
Jan  5 14:23:58.933: INFO: (10) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 10.826032ms)
Jan  5 14:23:58.935: INFO: (10) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 13.349992ms)
Jan  5 14:23:58.935: INFO: (10) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 13.430578ms)
Jan  5 14:23:58.935: INFO: (10) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 13.83303ms)
Jan  5 14:23:58.936: INFO: (10) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 13.812241ms)
Jan  5 14:23:58.943: INFO: (11) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 7.295321ms)
Jan  5 14:23:58.944: INFO: (11) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 7.845153ms)
Jan  5 14:23:58.944: INFO: (11) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 7.82339ms)
Jan  5 14:23:58.944: INFO: (11) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 7.814449ms)
Jan  5 14:23:58.944: INFO: (11) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 8.486411ms)
Jan  5 14:23:58.945: INFO: (11) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 9.411381ms)
Jan  5 14:23:58.945: INFO: (11) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 9.407376ms)
Jan  5 14:23:58.945: INFO: (11) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 9.507447ms)
Jan  5 14:23:58.946: INFO: (11) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 9.942739ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 10.923153ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 10.793848ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 10.906703ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 11.641841ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 11.515019ms)
Jan  5 14:23:58.947: INFO: (11) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 11.696841ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 7.501344ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 7.438746ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 7.499928ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 7.502218ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 7.930457ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 7.811651ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 7.890654ms)
Jan  5 14:23:58.955: INFO: (12) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 6.618476ms)
Jan  5 14:23:58.966: INFO: (13) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 6.68749ms)
Jan  5 14:23:58.967: INFO: (13) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 7.240487ms)
Jan  5 14:23:58.967: INFO: (13) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 7.072011ms)
Jan  5 14:23:58.967: INFO: (13) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 7.201012ms)
Jan  5 14:23:58.971: INFO: (13) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 11.52776ms)
Jan  5 14:23:58.972: INFO: (13) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 12.289969ms)
Jan  5 14:23:58.972: INFO: (13) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 12.083724ms)
Jan  5 14:23:58.973: INFO: (13) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 13.155924ms)
Jan  5 14:23:58.973: INFO: (13) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 13.403256ms)
Jan  5 14:23:58.973: INFO: (13) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 13.221841ms)
Jan  5 14:23:58.974: INFO: (13) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 14.36835ms)
Jan  5 14:23:58.974: INFO: (13) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 14.880327ms)
Jan  5 14:23:58.975: INFO: (13) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 15.474764ms)
Jan  5 14:23:58.975: INFO: (13) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 15.528881ms)
Jan  5 14:23:58.975: INFO: (13) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: ... (200; 10.445558ms)
Jan  5 14:23:58.986: INFO: (14) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 10.937063ms)
Jan  5 14:23:58.987: INFO: (14) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 11.25826ms)
Jan  5 14:23:58.990: INFO: (14) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 14.431301ms)
Jan  5 14:23:58.990: INFO: (14) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 14.437364ms)
Jan  5 14:23:58.990: INFO: (14) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 14.313031ms)
Jan  5 14:23:58.990: INFO: (14) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 14.350702ms)
Jan  5 14:23:58.990: INFO: (14) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 14.652888ms)
Jan  5 14:23:58.992: INFO: (14) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 16.142874ms)
Jan  5 14:23:58.992: INFO: (14) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 16.62283ms)
Jan  5 14:23:58.992: INFO: (14) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 16.905118ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 6.004528ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 6.501089ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 6.787594ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 6.801359ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 6.796172ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 6.874153ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 6.988784ms)
Jan  5 14:23:58.999: INFO: (15) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 7.599248ms)
Jan  5 14:23:59.011: INFO: (16) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 7.871898ms)
Jan  5 14:23:59.012: INFO: (16) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 8.820715ms)
Jan  5 14:23:59.012: INFO: (16) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 8.957852ms)
Jan  5 14:23:59.012: INFO: (16) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 8.938213ms)
Jan  5 14:23:59.013: INFO: (16) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 9.364863ms)
Jan  5 14:23:59.015: INFO: (16) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 11.102403ms)
Jan  5 14:23:59.015: INFO: (16) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 11.072237ms)
Jan  5 14:23:59.015: INFO: (16) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 11.70787ms)
Jan  5 14:23:59.015: INFO: (16) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 12.085235ms)
Jan  5 14:23:59.015: INFO: (16) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 12.872185ms)
Jan  5 14:23:59.028: INFO: (17) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 11.230946ms)
Jan  5 14:23:59.028: INFO: (17) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 11.585898ms)
Jan  5 14:23:59.028: INFO: (17) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 11.524848ms)
Jan  5 14:23:59.028: INFO: (17) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 11.787918ms)
Jan  5 14:23:59.029: INFO: (17) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 13.378593ms)
Jan  5 14:23:59.030: INFO: (17) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 13.880013ms)
Jan  5 14:23:59.030: INFO: (17) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 14.095733ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 14.270418ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 14.381026ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 14.723928ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 14.664486ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 14.780418ms)
Jan  5 14:23:59.031: INFO: (17) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 14.899849ms)
Jan  5 14:23:59.036: INFO: (18) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 4.329087ms)
Jan  5 14:23:59.036: INFO: (18) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 4.686458ms)
Jan  5 14:23:59.037: INFO: (18) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test (200; 5.966463ms)
Jan  5 14:23:59.038: INFO: (18) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 6.367079ms)
Jan  5 14:23:59.039: INFO: (18) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:1080/proxy/: test<... (200; 6.833977ms)
Jan  5 14:23:59.039: INFO: (18) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 7.06556ms)
Jan  5 14:23:59.039: INFO: (18) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 7.11691ms)
Jan  5 14:23:59.039: INFO: (18) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 7.122552ms)
Jan  5 14:23:59.039: INFO: (18) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 7.105865ms)
Jan  5 14:23:59.040: INFO: (18) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 8.324811ms)
Jan  5 14:23:59.040: INFO: (18) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 8.430596ms)
Jan  5 14:23:59.040: INFO: (18) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 8.626003ms)
Jan  5 14:23:59.041: INFO: (18) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 8.922199ms)
Jan  5 14:23:59.041: INFO: (18) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 8.790629ms)
Jan  5 14:23:59.048: INFO: (19) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname2/proxy/: bar (200; 6.87844ms)
Jan  5 14:23:59.048: INFO: (19) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/: tls baz (200; 7.299964ms)
Jan  5 14:23:59.056: INFO: (19) /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname2/proxy/: tls qux (200; 15.740837ms)
Jan  5 14:23:59.057: INFO: (19) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7/proxy/: test (200; 16.656096ms)
Jan  5 14:23:59.057: INFO: (19) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:460/proxy/: tls baz (200; 16.874162ms)
Jan  5 14:23:59.058: INFO: (19) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:160/proxy/: foo (200; 17.183995ms)
Jan  5 14:23:59.058: INFO: (19) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:462/proxy/: tls qux (200; 17.057618ms)
Jan  5 14:23:59.058: INFO: (19) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/: ... (200; 17.119872ms)
Jan  5 14:23:59.058: INFO: (19) /api/v1/namespaces/proxy-4080/pods/proxy-service-52tdc-25jz7:162/proxy/: bar (200; 17.213203ms)
Jan  5 14:23:59.058: INFO: (19) /api/v1/namespaces/proxy-4080/pods/https:proxy-service-52tdc-25jz7:443/proxy/: test<... (200; 17.583486ms)
Jan  5 14:23:59.059: INFO: (19) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:160/proxy/: foo (200; 17.931209ms)
Jan  5 14:23:59.059: INFO: (19) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname1/proxy/: foo (200; 18.024316ms)
Jan  5 14:23:59.059: INFO: (19) /api/v1/namespaces/proxy-4080/services/proxy-service-52tdc:portname2/proxy/: bar (200; 18.139279ms)
Jan  5 14:23:59.059: INFO: (19) /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:162/proxy/: bar (200; 18.581622ms)
Jan  5 14:23:59.059: INFO: (19) /api/v1/namespaces/proxy-4080/services/http:proxy-service-52tdc:portname1/proxy/: foo (200; 18.788107ms)
STEP: deleting ReplicationController proxy-service-52tdc in namespace proxy-4080, will wait for the garbage collector to delete the pods
Jan  5 14:23:59.125: INFO: Deleting ReplicationController proxy-service-52tdc took: 11.655687ms
Jan  5 14:23:59.426: INFO: Terminating ReplicationController proxy-service-52tdc pods took: 300.402686ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:24:04.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4080" for this suite.
Jan  5 14:24:10.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:24:11.032: INFO: namespace proxy-4080 deletion completed in 6.195436605s

• [SLOW TEST:23.745 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
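
Note: all 320 attempts above target the apiserver's proxy subresource, whose path encodes an optional scheme prefix, the pod or service name, and an optional port number or port name. The helper below is hypothetical (not part of the e2e framework) and only illustrates how those paths are composed.

package main

import "fmt"

// proxyPath builds /api/v1/namespaces/<ns>/<kind>/<scheme:name:port>/proxy/,
// the URL shape seen throughout the attempts logged above.
func proxyPath(ns, kind, scheme, name, port string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
}

func main() {
	// Matches e.g. /api/v1/namespaces/proxy-4080/pods/http:proxy-service-52tdc-25jz7:1080/proxy/
	fmt.Println(proxyPath("proxy-4080", "pods", "http", "proxy-service-52tdc-25jz7", "1080"))
	// Matches e.g. /api/v1/namespaces/proxy-4080/services/https:proxy-service-52tdc:tlsportname1/proxy/
	fmt.Println(proxyPath("proxy-4080", "services", "https", "proxy-service-52tdc", "tlsportname1"))
}
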
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:24:11.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  5 14:24:11.132: INFO: Waiting up to 5m0s for pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06" in namespace "downward-api-1323" to be "success or failure"
Jan  5 14:24:11.139: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06": Phase="Pending", Reason="", readiness=false. Elapsed: 7.377359ms
Jan  5 14:24:13.147: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014496659s
Jan  5 14:24:15.156: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023838193s
Jan  5 14:24:17.167: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034577718s
Jan  5 14:24:19.180: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048121514s
STEP: Saw pod success
Jan  5 14:24:19.180: INFO: Pod "downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06" satisfied condition "success or failure"
Jan  5 14:24:19.189: INFO: Trying to get logs from node iruya-node pod downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06 container dapi-container: 
STEP: delete the pod
Jan  5 14:24:19.248: INFO: Waiting for pod downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06 to disappear
Jan  5 14:24:19.270: INFO: Pod downward-api-9ac86de5-6a2f-4aec-adc3-83471bcc5b06 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:24:19.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1323" for this suite.
Jan  5 14:24:25.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:24:25.499: INFO: namespace downward-api-1323 deletion completed in 6.19539452s

• [SLOW TEST:14.465 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
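
Note: the downward-api pod above surfaces the node's IP through an env var backed by a fieldRef, which the test then reads out of the container log. A minimal sketch with assumed pod/container names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// status.hostIP resolves to the node's IP at runtime.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
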
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:24:25.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  5 14:24:25.560: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  5 14:24:25.586: INFO: Waiting for terminating namespaces to be deleted...
Jan  5 14:24:25.589: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  5 14:24:25.597: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.597: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  5 14:24:25.597: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  5 14:24:25.597: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:24:25.597: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:24:25.597: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  5 14:24:25.607: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  5 14:24:25.607: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  5 14:24:25.607: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container coredns ready: true, restart count 0
Jan  5 14:24:25.607: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container etcd ready: true, restart count 0
Jan  5 14:24:25.607: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:24:25.607: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:24:25.607: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container coredns ready: true, restart count 0
Jan  5 14:24:25.607: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  5 14:24:25.607: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  5 14:24:25.607: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  5 14:24:25.721: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  5 14:24:25.721: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34fcb578-0829-4662-81d3-27855553a9ea.15e703b3f3c79c5a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7992/filler-pod-34fcb578-0829-4662-81d3-27855553a9ea to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34fcb578-0829-4662-81d3-27855553a9ea.15e703b501e965b2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34fcb578-0829-4662-81d3-27855553a9ea.15e703b5c41a5aaa], Reason = [Created], Message = [Created container filler-pod-34fcb578-0829-4662-81d3-27855553a9ea]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34fcb578-0829-4662-81d3-27855553a9ea.15e703b5e6ffd26f], Reason = [Started], Message = [Started container filler-pod-34fcb578-0829-4662-81d3-27855553a9ea]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f.15e703b3f11bb212], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7992/filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f.15e703b4f9f52012], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f.15e703b5b1ff0753], Reason = [Created], Message = [Created container filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f.15e703b5ea2c7a09], Reason = [Started], Message = [Started container filler-pod-4b6cd0c0-71da-4c4a-a35a-9e431bca916f]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e703b648fddb83], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:24:37.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7992" for this suite.
Jan  5 14:24:43.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:24:43.150: INFO: namespace sched-pred-7992 deletion completed in 6.091809263s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:17.651 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
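
Note: the predicate test saturates each node's remaining allocatable CPU with filler pods, then submits one more pod whose request cannot fit on any node, expecting the "0/2 nodes are available: 2 Insufficient cpu" FailedScheduling event logged above. A sketch of that final pod, with an assumed request value:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Any request larger than the CPU still free on every
					// node makes the pod unschedulable ("Insufficient cpu").
					// 600m is an assumed value, not the suite's computed one.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
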
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:24:43.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-53c8eca8-f37d-45d1-958c-3fc33348d986 in namespace container-probe-5236
Jan  5 14:24:54.662: INFO: Started pod test-webserver-53c8eca8-f37d-45d1-958c-3fc33348d986 in namespace container-probe-5236
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 14:24:54.666: INFO: Initial restart count of pod test-webserver-53c8eca8-f37d-45d1-958c-3fc33348d986 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:28:55.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5236" for this suite.
Jan  5 14:29:01.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:29:02.033: INFO: namespace container-probe-5236 deletion completed in 6.160327233s

• [SLOW TEST:258.881 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
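
Note: the pod above carries an HTTP GET liveness probe against /healthz that keeps succeeding, so restartCount stays at 0 for the whole four-minute observation window. A sketch with assumed names, image, and timings; field names follow the v1.15-era core/v1 API, where the probe handler is the embedded Handler (newer releases call it ProbeHandler).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/test-webserver", // assumption: any server answering /healthz
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					// As long as /healthz returns 2xx, the kubelet never
					// restarts the container and restartCount stays 0.
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
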
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:29:02.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3f7fcd2d-477e-4d1e-8bd4-44276003f35e in namespace container-probe-1980
Jan  5 14:29:14.222: INFO: Started pod busybox-3f7fcd2d-477e-4d1e-8bd4-44276003f35e in namespace container-probe-1980
STEP: checking the pod's current state and verifying that restartCount is present
Jan  5 14:29:14.226: INFO: Initial restart count of pod busybox-3f7fcd2d-477e-4d1e-8bd4-44276003f35e is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:33:14.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1980" for this suite.
Jan  5 14:33:22.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:33:22.167: INFO: namespace container-probe-1980 deletion completed in 6.34059133s

• [SLOW TEST:260.134 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
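
Note: the exec variant is the same pattern with a command probe. In this sketch (same assumptions and API vintage as above) the container creates /tmp/health once and never removes it, so `cat /tmp/health` keeps exiting 0 and the pod is never restarted.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Create the health file once, then stay alive.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
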
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:33:22.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  5 14:33:22.352: INFO: Waiting up to 5m0s for pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809" in namespace "emptydir-389" to be "success or failure"
Jan  5 14:33:22.378: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 26.421531ms
Jan  5 14:33:24.386: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034658448s
Jan  5 14:33:26.401: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04947633s
Jan  5 14:33:28.409: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057677072s
Jan  5 14:33:30.419: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067369755s
Jan  5 14:33:32.432: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080459476s
Jan  5 14:33:34.440: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Pending", Reason="", readiness=false. Elapsed: 12.088387696s
Jan  5 14:33:36.450: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.098498694s
STEP: Saw pod success
Jan  5 14:33:36.450: INFO: Pod "pod-6d64cc23-4613-4684-9574-fddfe1e41809" satisfied condition "success or failure"
Jan  5 14:33:36.457: INFO: Trying to get logs from node iruya-node pod pod-6d64cc23-4613-4684-9574-fddfe1e41809 container test-container: 
STEP: delete the pod
Jan  5 14:33:36.732: INFO: Waiting for pod pod-6d64cc23-4613-4684-9574-fddfe1e41809 to disappear
Jan  5 14:33:36.761: INFO: Pod pod-6d64cc23-4613-4684-9574-fddfe1e41809 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:33:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-389" for this suite.
Jan  5 14:33:42.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:33:42.917: INFO: namespace emptydir-389 deletion completed in 6.146001692s

• [SLOW TEST:20.750 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
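
The (non-root,0666,tmpfs) case boils down to: a memory-backed emptyDir, a non-root UID, and a file created with mode 0666 that is then read back. A sketch under those assumptions (UID, paths, and image are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1001) // any non-root UID
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
                RestartPolicy:   corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" makes the emptyDir a tmpfs mount.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "docker.io/library/busybox:1.29",
                    Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && stat -c %a /scratch/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
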
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:33:42.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:33:57.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4077" for this suite.
Jan  5 14:34:41.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:34:41.443: INFO: namespace kubelet-test-4077 deletion completed in 44.242008718s

• [SLOW TEST:58.526 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
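
hostAliases entries are merged by the kubelet into the pod's /etc/hosts at startup, which is the behavior this spec verifies. A sketch of the relevant field (IPs and hostnames are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        spec := corev1.PodSpec{
            HostAliases: []corev1.HostAlias{{
                IP:        "127.0.0.1",
                Hostnames: []string{"foo.local", "bar.local"},
            }},
            Containers: []corev1.Container{{
                Name:    "busybox-host-aliases",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "cat /etc/hosts"}, // shows the injected entries
            }},
        }
        fmt.Println(len(spec.HostAliases))
    }
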
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:34:41.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  5 14:34:56.320: INFO: Successfully updated pod "annotationupdate935e6698-2537-4cfb-8a00-2401c960bac1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:34:58.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-87" for this suite.
Jan  5 14:35:20.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:35:20.733: INFO: namespace projected-87 deletion completed in 22.228464849s

• [SLOW TEST:39.289 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
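
The annotation-update check works because a projected downwardAPI volume exposing metadata.annotations is refreshed by the kubelet after the pod's annotations change (on its next sync, so eventually rather than instantly). A sketch of the volume (file path is illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "annotations",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name) // mount it and watch .../annotations change after an update
    }
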
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:35:20.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:35:20.870: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577" in namespace "projected-2688" to be "success or failure"
Jan  5 14:35:20.891: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 20.088127ms
Jan  5 14:35:22.902: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031635029s
Jan  5 14:35:24.912: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041755877s
Jan  5 14:35:27.102: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23185928s
Jan  5 14:35:29.112: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 8.241140903s
Jan  5 14:35:31.122: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 10.251286659s
Jan  5 14:35:33.260: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Pending", Reason="", readiness=false. Elapsed: 12.389852282s
Jan  5 14:35:35.266: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.395458454s
STEP: Saw pod success
Jan  5 14:35:35.266: INFO: Pod "downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577" satisfied condition "success or failure"
Jan  5 14:35:35.270: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577 container client-container: 
STEP: delete the pod
Jan  5 14:35:35.420: INFO: Waiting for pod downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577 to disappear
Jan  5 14:35:35.431: INFO: Pod downwardapi-volume-3862a3ad-7a48-42fa-a6df-2ed76ed51577 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:35:35.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2688" for this suite.
Jan  5 14:35:41.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:35:41.619: INFO: namespace projected-2688 deletion completed in 6.180455785s

• [SLOW TEST:20.885 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
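
DefaultMode sets the permission bits for every file the projected volume writes, unless an individual item overrides it with its own Mode. A sketch (0400 here is an assumption for illustration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // applied to each projected file lacking a per-item Mode
        src := corev1.ProjectedVolumeSource{
            DefaultMode: &mode,
            Sources: []corev1.VolumeProjection{{
                DownwardAPI: &corev1.DownwardAPIProjection{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            }},
        }
        fmt.Println(*src.DefaultMode)
    }
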
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:35:41.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:35:41.805: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 16.382517ms)
Jan  5 14:35:41.811: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.925471ms)
Jan  5 14:35:41.822: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.018666ms)
Jan  5 14:35:41.835: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.110639ms)
Jan  5 14:35:41.844: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.451127ms)
Jan  5 14:35:41.854: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 9.583152ms)
Jan  5 14:35:41.872: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 18.386193ms)
Jan  5 14:35:41.935: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 62.366788ms)
Jan  5 14:35:41.941: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.514196ms)
Jan  5 14:35:41.947: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.852378ms)
Jan  5 14:35:41.952: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.242105ms)
Jan  5 14:35:41.957: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.312314ms)
Jan  5 14:35:41.962: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.131384ms)
Jan  5 14:35:41.967: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.797074ms)
Jan  5 14:35:41.982: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 15.090029ms)
Jan  5 14:35:41.995: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.10542ms)
Jan  5 14:35:42.001: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.838952ms)
Jan  5 14:35:42.007: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.590593ms)
Jan  5 14:35:42.013: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.686078ms)
Jan  5 14:35:42.021: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.72377ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:35:42.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5084" for this suite.
Jan  5 14:35:48.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:35:48.849: INFO: namespace proxy-5084 deletion completed in 6.820353849s

• [SLOW TEST:7.230 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
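
Each numbered request above is a GET against the node's proxy subresource on the apiserver. A sketch of issuing the same request with client-go (assuming a recent client-go; before v0.18 DoRaw takes no context argument):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // GET /api/v1/nodes/iruya-node/proxy/logs/
        data, err := clientset.CoreV1().RESTClient().
            Get().
            Resource("nodes").
            Name("iruya-node").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", data) // directory listing of the node's /var/log
    }
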
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:35:48.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  5 14:35:49.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1500 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  5 14:36:06.244: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0105 14:36:04.582606    2736 log.go:172] (0xc000118790) (0xc00018e280) Create stream\nI0105 14:36:04.582841    2736 log.go:172] (0xc000118790) (0xc00018e280) Stream added, broadcasting: 1\nI0105 14:36:04.598310    2736 log.go:172] (0xc000118790) Reply frame received for 1\nI0105 14:36:04.598627    2736 log.go:172] (0xc000118790) (0xc000618e60) Create stream\nI0105 14:36:04.598670    2736 log.go:172] (0xc000118790) (0xc000618e60) Stream added, broadcasting: 3\nI0105 14:36:04.600744    2736 log.go:172] (0xc000118790) Reply frame received for 3\nI0105 14:36:04.601091    2736 log.go:172] (0xc000118790) (0xc00092c000) Create stream\nI0105 14:36:04.601129    2736 log.go:172] (0xc000118790) (0xc00092c000) Stream added, broadcasting: 5\nI0105 14:36:04.602916    2736 log.go:172] (0xc000118790) Reply frame received for 5\nI0105 14:36:04.602964    2736 log.go:172] (0xc000118790) (0xc00092c0a0) Create stream\nI0105 14:36:04.602976    2736 log.go:172] (0xc000118790) (0xc00092c0a0) Stream added, broadcasting: 7\nI0105 14:36:04.605376    2736 log.go:172] (0xc000118790) Reply frame received for 7\nI0105 14:36:04.606666    2736 log.go:172] (0xc000618e60) (3) Writing data frame\nI0105 14:36:04.607821    2736 log.go:172] (0xc000618e60) (3) Writing data frame\nI0105 14:36:04.621204    2736 log.go:172] (0xc000118790) Data frame received for 5\nI0105 14:36:04.621243    2736 log.go:172] (0xc00092c000) (5) Data frame handling\nI0105 14:36:04.621266    2736 log.go:172] (0xc00092c000) (5) Data frame sent\nI0105 14:36:04.624420    2736 log.go:172] (0xc000118790) Data frame received for 5\nI0105 14:36:04.624460    2736 log.go:172] (0xc00092c000) (5) Data frame handling\nI0105 14:36:04.624482    2736 log.go:172] (0xc00092c000) (5) Data frame sent\nI0105 14:36:06.159455    2736 log.go:172] (0xc000118790) Data frame received for 1\nI0105 14:36:06.160243    2736 log.go:172] (0xc000118790) (0xc00092c000) Stream removed, broadcasting: 5\nI0105 14:36:06.160453    2736 log.go:172] (0xc000118790) (0xc000618e60) Stream removed, broadcasting: 3\nI0105 14:36:06.160696    2736 log.go:172] (0xc00018e280) (1) Data frame handling\nI0105 14:36:06.160766    2736 log.go:172] (0xc00018e280) (1) Data frame sent\nI0105 14:36:06.160800    2736 log.go:172] (0xc000118790) (0xc00018e280) Stream removed, broadcasting: 1\nI0105 14:36:06.161103    2736 log.go:172] (0xc000118790) (0xc00092c0a0) Stream removed, broadcasting: 7\nI0105 14:36:06.162052    2736 log.go:172] (0xc000118790) (0xc00018e280) Stream removed, broadcasting: 1\nI0105 14:36:06.162082    2736 log.go:172] (0xc000118790) (0xc000618e60) Stream removed, broadcasting: 3\nI0105 14:36:06.162106    2736 log.go:172] (0xc000118790) (0xc00092c000) Stream removed, broadcasting: 5\nI0105 14:36:06.162134    2736 log.go:172] (0xc000118790) (0xc00092c0a0) Stream removed, broadcasting: 7\nI0105 14:36:06.162429    2736 log.go:172] (0xc000118790) Go away received\n"
Jan  5 14:36:06.245: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:36:08.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1500" for this suite.
Jan  5 14:36:18.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:36:18.659: INFO: namespace kubectl-1500 deletion completed in 10.396065248s

• [SLOW TEST:29.808 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
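
Since --generator=job/v1 is deprecated (the stderr above says so), the equivalent object can be created directly. A sketch of that Job, its shape inferred from the kubectl command line above rather than taken from the framework:

    package main

    import (
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure,
                        Containers: []corev1.Container{{
                            Name:    "e2e-test-rm-busybox-job",
                            Image:   "docker.io/library/busybox:1.29",
                            Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
                            Stdin:   true, // the test attaches and pipes "abcd1234" in
                        }},
                    },
                },
            },
        }
        fmt.Println(job.Name) // create it, attach, then delete to mimic --rm
    }
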
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:36:18.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:36:18.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca" in namespace "downward-api-2117" to be "success or failure"
Jan  5 14:36:19.103: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 203.784227ms
Jan  5 14:36:21.123: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223589143s
Jan  5 14:36:23.133: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234334732s
Jan  5 14:36:25.141: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241576457s
Jan  5 14:36:27.152: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252673994s
Jan  5 14:36:29.161: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262335938s
Jan  5 14:36:31.171: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.271925605s
Jan  5 14:36:33.178: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.278710869s
Jan  5 14:36:35.213: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.313755524s
STEP: Saw pod success
Jan  5 14:36:35.213: INFO: Pod "downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca" satisfied condition "success or failure"
Jan  5 14:36:35.218: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca container client-container: 
STEP: delete the pod
Jan  5 14:36:35.358: INFO: Waiting for pod downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca to disappear
Jan  5 14:36:35.389: INFO: Pod downwardapi-volume-6be4ea6e-e72b-48f3-9998-ae05fcb8e9ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:36:35.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2117" for this suite.
Jan  5 14:36:41.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:36:41.651: INFO: namespace downward-api-2117 deletion completed in 6.238449079s

• [SLOW TEST:22.991 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:36:41.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:36:41.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca" in namespace "projected-763" to be "success or failure"
Jan  5 14:36:41.854: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 13.976436ms
Jan  5 14:36:43.874: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033793403s
Jan  5 14:36:45.890: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050143168s
Jan  5 14:36:47.911: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070874028s
Jan  5 14:36:49.922: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081952173s
Jan  5 14:36:51.929: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089016056s
Jan  5 14:36:53.938: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.098416537s
Jan  5 14:36:55.962: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.12203089s
STEP: Saw pod success
Jan  5 14:36:55.962: INFO: Pod "downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca" satisfied condition "success or failure"
Jan  5 14:36:55.967: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca container client-container: 
STEP: delete the pod
Jan  5 14:36:56.094: INFO: Waiting for pod downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca to disappear
Jan  5 14:36:56.102: INFO: Pod downwardapi-volume-dbe0b1c8-080b-49fe-a1ca-028ab4113eca no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:36:56.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-763" for this suite.
Jan  5 14:37:02.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:37:02.465: INFO: namespace projected-763 deletion completed in 6.309281409s

• [SLOW TEST:20.813 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
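
The memory-request value reaches the container through a downwardAPI file backed by a resourceFieldRef; the Divisor scales the raw byte count. A sketch (container name, request size, and divisor are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        item := corev1.DownwardAPIVolumeFile{
            Path: "memory_request",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "requests.memory",
                Divisor:       resource.MustParse("1Mi"), // file reads "32" for a 32Mi request
            },
        }
        requests := corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")}
        fmt.Println(item.Path, requests.Memory().String())
    }
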
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:37:02.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  5 14:37:02.662: INFO: Waiting up to 5m0s for pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759" in namespace "containers-7937" to be "success or failure"
Jan  5 14:37:02.689: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 26.344828ms
Jan  5 14:37:04.701: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038910575s
Jan  5 14:37:06.716: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053911212s
Jan  5 14:37:08.895: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232742635s
Jan  5 14:37:10.908: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245377204s
Jan  5 14:37:12.913: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25105138s
Jan  5 14:37:14.924: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Pending", Reason="", readiness=false. Elapsed: 12.26190812s
Jan  5 14:37:16.930: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.267887709s
STEP: Saw pod success
Jan  5 14:37:16.930: INFO: Pod "client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759" satisfied condition "success or failure"
Jan  5 14:37:16.933: INFO: Trying to get logs from node iruya-node pod client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759 container test-container: 
STEP: delete the pod
Jan  5 14:37:16.998: INFO: Waiting for pod client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759 to disappear
Jan  5 14:37:17.038: INFO: Pod client-containers-5a89b033-23c8-4363-a8f9-433ce0e3f759 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:37:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7937" for this suite.
Jan  5 14:37:23.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:37:23.243: INFO: namespace containers-7937 deletion completed in 6.199780156s

• [SLOW TEST:20.777 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
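
Command and Args on the container map onto the image's ENTRYPOINT and CMD respectively; setting both, as this test does, overrides the image entirely. A sketch:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:    "test-container",
            Image:   "docker.io/library/busybox:1.29",
            Command: []string{"/bin/echo"},             // replaces the image ENTRYPOINT
            Args:    []string{"override", "arguments"}, // replaces the image CMD
        }
        fmt.Println(c.Command, c.Args)
    }
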
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:37:23.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  5 14:37:35.865: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:37:35.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9427" for this suite.
Jan  5 14:37:42.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:37:43.636: INFO: namespace container-runtime-9427 deletion completed in 7.715457975s

• [SLOW TEST:20.393 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
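
FallbackToLogsOnError means: if the container fails and wrote nothing to its terminationMessagePath (/dev/termination-log by default), the kubelet uses the tail of the container log as the message, which is how "DONE" gets matched above. A sketch of a container exercising that path (the command is an assumption):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "termination-message-container",
            Image: "docker.io/library/busybox:1.29",
            // Writes to stdout only and exits nonzero, so the message file stays
            // empty and the kubelet falls back to the log tail: "DONE".
            Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
        fmt.Println(c.TerminationMessagePolicy)
    }
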
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:37:43.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  5 14:37:43.869: INFO: Waiting up to 5m0s for pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e" in namespace "containers-1371" to be "success or failure"
Jan  5 14:37:43.948: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 78.7858ms
Jan  5 14:37:45.957: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088063542s
Jan  5 14:37:47.965: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096027726s
Jan  5 14:37:49.972: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103371124s
Jan  5 14:37:51.980: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11134115s
Jan  5 14:37:53.994: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.12492136s
Jan  5 14:37:56.003: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.134255153s
Jan  5 14:37:58.011: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.1421548s
STEP: Saw pod success
Jan  5 14:37:58.011: INFO: Pod "client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e" satisfied condition "success or failure"
Jan  5 14:37:58.020: INFO: Trying to get logs from node iruya-node pod client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e container test-container: 
STEP: delete the pod
Jan  5 14:37:58.321: INFO: Waiting for pod client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e to disappear
Jan  5 14:37:58.425: INFO: Pod client-containers-7b5b4987-388f-4146-9579-9eb0ac96aa3e no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:37:58.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1371" for this suite.
Jan  5 14:38:06.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:38:06.744: INFO: namespace containers-1371 deletion completed in 8.307628762s

• [SLOW TEST:23.107 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:38:06.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  5 14:38:06.961: INFO: Waiting up to 5m0s for pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355" in namespace "var-expansion-2307" to be "success or failure"
Jan  5 14:38:06.982: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 20.123773ms
Jan  5 14:38:08.992: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030524661s
Jan  5 14:38:11.009: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047656343s
Jan  5 14:38:13.016: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0539618s
Jan  5 14:38:15.022: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060560487s
Jan  5 14:38:17.149: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187081221s
Jan  5 14:38:19.185: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 12.222774969s
Jan  5 14:38:21.215: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Pending", Reason="", readiness=false. Elapsed: 14.253241151s
Jan  5 14:38:23.225: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.263579458s
STEP: Saw pod success
Jan  5 14:38:23.225: INFO: Pod "var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355" satisfied condition "success or failure"
Jan  5 14:38:23.231: INFO: Trying to get logs from node iruya-node pod var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355 container dapi-container: 
STEP: delete the pod
Jan  5 14:38:23.291: INFO: Waiting for pod var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355 to disappear
Jan  5 14:38:23.392: INFO: Pod var-expansion-86e8c18c-a5cf-47f0-aeb2-329c6bf5a355 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:38:23.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2307" for this suite.
Jan  5 14:38:29.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:38:29.626: INFO: namespace var-expansion-2307 deletion completed in 6.226378225s

• [SLOW TEST:22.881 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
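
The substitution happens in Kubernetes itself, not in the shell: $(VAR) references in command and args are expanded from the container's env before the process starts. A sketch (variable name and value are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "dapi-container",
            Image: "docker.io/library/busybox:1.29",
            Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
            // Kubernetes rewrites $(MESSAGE) to "test-value" before exec;
            // an unresolvable $(NAME) is left as literal text.
            Command: []string{"sh", "-c", "echo $(MESSAGE)"},
        }
        fmt.Println(c.Env[0].Value)
    }
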
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:38:29.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 14:38:29.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc" in namespace "downward-api-5637" to be "success or failure"
Jan  5 14:38:29.895: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 47.355338ms
Jan  5 14:38:31.906: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058916535s
Jan  5 14:38:33.919: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072057895s
Jan  5 14:38:35.930: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082764481s
Jan  5 14:38:37.942: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094945362s
Jan  5 14:38:39.954: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107039816s
Jan  5 14:38:41.971: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.123906647s
Jan  5 14:38:43.993: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.145885869s
STEP: Saw pod success
Jan  5 14:38:43.994: INFO: Pod "downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc" satisfied condition "success or failure"
Jan  5 14:38:44.001: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc container client-container: 
STEP: delete the pod
Jan  5 14:38:44.071: INFO: Waiting for pod downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc to disappear
Jan  5 14:38:44.081: INFO: Pod downwardapi-volume-2832e82a-7188-44a9-9e29-5a79c9661bbc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:38:44.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5637" for this suite.
Jan  5 14:38:51.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:38:51.355: INFO: namespace downward-api-5637 deletion completed in 7.252673037s

• [SLOW TEST:21.728 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:38:51.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  5 14:38:51.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2216'
Jan  5 14:38:52.156: INFO: stderr: ""
Jan  5 14:38:52.156: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  5 14:38:53.165: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:53.166: INFO: Found 0 / 1
Jan  5 14:38:54.162: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:54.162: INFO: Found 0 / 1
Jan  5 14:38:55.167: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:55.167: INFO: Found 0 / 1
Jan  5 14:38:56.165: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:56.165: INFO: Found 0 / 1
Jan  5 14:38:57.169: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:57.169: INFO: Found 0 / 1
Jan  5 14:38:58.164: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:58.164: INFO: Found 0 / 1
Jan  5 14:38:59.164: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:38:59.164: INFO: Found 0 / 1
Jan  5 14:39:00.166: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:00.166: INFO: Found 0 / 1
Jan  5 14:39:01.167: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:01.167: INFO: Found 0 / 1
Jan  5 14:39:02.164: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:02.164: INFO: Found 0 / 1
Jan  5 14:39:03.173: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:03.173: INFO: Found 0 / 1
Jan  5 14:39:04.166: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:04.166: INFO: Found 0 / 1
Jan  5 14:39:05.164: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:05.164: INFO: Found 1 / 1
Jan  5 14:39:05.164: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  5 14:39:05.168: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:05.168: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  5 14:39:05.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-v49xs --namespace=kubectl-2216 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  5 14:39:05.465: INFO: stderr: ""
Jan  5 14:39:05.465: INFO: stdout: "pod/redis-master-v49xs patched\n"
STEP: checking annotations
Jan  5 14:39:05.493: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 14:39:05.494: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:39:05.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2216" for this suite.
Jan  5 14:39:27.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:39:27.834: INFO: namespace kubectl-2216 deletion completed in 22.334251344s

• [SLOW TEST:36.478 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
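
The patch above is a strategic-merge patch; the same call through client-go looks roughly like this (assuming a recent client-go; older releases omit the context argument from Patch):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err = clientset.CoreV1().Pods("kubectl-2216").Patch(
            context.TODO(), "redis-master-v49xs",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil {
            panic(err)
        }
    }
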
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:39:27.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  5 14:39:28.048: INFO: Waiting up to 5m0s for pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5" in namespace "emptydir-5359" to be "success or failure"
Jan  5 14:39:28.065: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.346681ms
Jan  5 14:39:30.077: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028509584s
Jan  5 14:39:32.093: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044605813s
Jan  5 14:39:34.105: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055801361s
Jan  5 14:39:36.109: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060464625s
Jan  5 14:39:38.121: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072696433s
Jan  5 14:39:40.129: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.080022375s
Jan  5 14:39:42.143: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.094390914s
STEP: Saw pod success
Jan  5 14:39:42.143: INFO: Pod "pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5" satisfied condition "success or failure"
Jan  5 14:39:42.147: INFO: Trying to get logs from node iruya-node pod pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5 container test-container: 
STEP: delete the pod
Jan  5 14:39:42.251: INFO: Waiting for pod pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5 to disappear
Jan  5 14:39:42.415: INFO: Pod pod-d67f372e-cfc4-4119-8617-8f64bfb6fcd5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:39:42.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5359" for this suite.
Jan  5 14:39:48.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:39:48.669: INFO: namespace emptydir-5359 deletion completed in 6.244945178s

• [SLOW TEST:20.835 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:39:48.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-d54e3cd6-4428-45f7-8cf0-33a13e3f52fe
STEP: Creating a pod to test consume configMaps
Jan  5 14:39:48.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce" in namespace "projected-2381" to be "success or failure"
Jan  5 14:39:48.965: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.467081ms
Jan  5 14:39:51.482: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.525500136s
Jan  5 14:39:57.283: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326848765s
Jan  5 14:39:59.294: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 10.337200848s
Jan  5 14:40:01.305: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 12.349162423s
Jan  5 14:40:03.314: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.357935051s
Jan  5 14:40:05.321: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Pending", Reason="", readiness=false. Elapsed: 16.365059853s
Jan  5 14:40:07.330: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.373412186s
STEP: Saw pod success
Jan  5 14:40:07.330: INFO: Pod "pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce" satisfied condition "success or failure"
Jan  5 14:40:07.335: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 14:40:07.396: INFO: Waiting for pod pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce to disappear
Jan  5 14:40:07.406: INFO: Pod pod-projected-configmaps-94d58f8b-b329-4320-8b63-9b9569b874ce no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:40:07.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2381" for this suite.
Jan  5 14:40:13.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:40:13.619: INFO: namespace projected-2381 deletion completed in 6.208649578s

• [SLOW TEST:24.950 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
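
In the projected ConfigMap test, "with mappings" means the volume's items list remaps a ConfigMap key to a custom path inside the mount, and "as non-root" means the consuming container runs with a non-zero UID. A rough equivalent of the objects the test creates, with hypothetical names and values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo      # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  securityContext:
    runAsUser: 1000                   # "as non-root"
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  restartPolicy: Never
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:                      # the "mapping": key -> custom path
          - key: data-1
            path: path/to/data-2
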
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:40:13.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-fb6d9217-cb4f-43f7-ae49-557b1bf319c5
STEP: Creating a pod to test consume secrets
Jan  5 14:40:13.813: INFO: Waiting up to 5m0s for pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc" in namespace "secrets-1299" to be "success or failure"
Jan  5 14:40:13.836: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.763022ms
Jan  5 14:40:15.851: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037794595s
Jan  5 14:40:18.219: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.406167337s
Jan  5 14:40:20.230: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416620314s
Jan  5 14:40:22.241: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.428280773s
Jan  5 14:40:24.254: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.441313571s
Jan  5 14:40:26.262: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.448974163s
Jan  5 14:40:28.275: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.461468627s
STEP: Saw pod success
Jan  5 14:40:28.275: INFO: Pod "pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc" satisfied condition "success or failure"
Jan  5 14:40:28.288: INFO: Trying to get logs from node iruya-node pod pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc container secret-env-test: 
STEP: delete the pod
Jan  5 14:40:29.707: INFO: Waiting for pod pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc to disappear
Jan  5 14:40:29.803: INFO: Pod pod-secrets-1da17318-be36-4924-960c-cc9b9ec787bc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:40:29.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1299" for this suite.
Jan  5 14:40:35.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:40:35.995: INFO: namespace secrets-1299 deletion completed in 6.178304529s

• [SLOW TEST:22.376 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
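
The Secrets-in-env-vars test creates a Secret and a pod whose container imports one of its keys through valueFrom.secretKeyRef, then verifies the value by reading the container log. A minimal sketch, assuming hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo               # hypothetical
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
  restartPolicy: Never
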
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:40:35.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3095
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  5 14:40:36.320: INFO: Found 0 stateful pods, waiting for 3
Jan  5 14:40:46.329: INFO: Found 1 stateful pod, waiting for 3

Jan  5 14:40:56.843: INFO: Found 2 stateful pods, waiting for 3
Jan  5 14:41:06.336: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:41:06.336: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:41:06.336: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 14:41:16.333: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:41:16.333: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:41:16.333: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  5 14:41:16.370: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  5 14:41:26.563: INFO: Updating stateful set ss2
Jan  5 14:41:26.577: INFO: Waiting for Pod statefulset-3095/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  5 14:41:37.227: INFO: Found 2 stateful pods, waiting for 3
Jan  5 14:41:47.236: INFO: Found 2 stateful pods, waiting for 3
Jan  5 14:41:57.336: INFO: Found 2 stateful pods, waiting for 3
Jan  5 14:42:07.237: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:42:07.237: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:42:07.237: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 14:42:17.238: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:42:17.239: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 14:42:17.239: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  5 14:42:17.267: INFO: Updating stateful set ss2
Jan  5 14:42:17.430: INFO: Waiting for Pod statefulset-3095/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 14:42:27.914: INFO: Updating stateful set ss2
Jan  5 14:42:28.321: INFO: Waiting for StatefulSet statefulset-3095/ss2 to complete update
Jan  5 14:42:28.321: INFO: Waiting for Pod statefulset-3095/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 14:42:38.336: INFO: Waiting for StatefulSet statefulset-3095/ss2 to complete update
Jan  5 14:42:38.336: INFO: Waiting for Pod statefulset-3095/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 14:42:48.346: INFO: Waiting for StatefulSet statefulset-3095/ss2 to complete update
Jan  5 14:42:48.347: INFO: Waiting for Pod statefulset-3095/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  5 14:42:58.347: INFO: Waiting for StatefulSet statefulset-3095/ss2 to complete update
Jan  5 14:43:08.390: INFO: Waiting for StatefulSet statefulset-3095/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  5 14:43:18.342: INFO: Deleting all statefulset in ns statefulset-3095
Jan  5 14:43:18.348: INFO: Scaling statefulset ss2 to 0
Jan  5 14:43:48.392: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 14:43:48.398: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:43:48.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3095" for this suite.
Jan  5 14:43:56.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:43:56.683: INFO: namespace statefulset-3095 deletion completed in 8.164539451s

• [SLOW TEST:200.687 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
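
The canary and phased updates above rely on the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to the partition receive the new revision. With partition=2 and three replicas, only ss2-2 is updated (the canary); lowering the partition to 1 and then 0 rolls the change out to ss2-1 and ss2-0 in phases, which is the sequence the log records. A sketch of the relevant spec, assuming hypothetical labels (the replica count, service name, and images match the log):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2-demo                   # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                    # canary: only ordinals >= 2 get the
                                      # new revision (ss2-2 in the log)
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # the updated image

A pod deleted mid-rollout is recreated at whichever revision its ordinal's side of the partition dictates, which is the "Restoring Pods to the correct revision when they are deleted" step above.
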
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:43:56.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:44:11.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6425" for this suite.
Jan  5 14:44:36.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:44:36.207: INFO: namespace replication-controller-6425 deletion completed in 24.196296073s

• [SLOW TEST:39.524 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
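
The adoption test creates a bare pod first and only then a ReplicationController whose selector matches the pod's label; because the desired replica already exists, the controller adopts the orphan (adding itself as an ownerReference) instead of creating a new pod. A sketch of the two objects, with the image choice being an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption                # the label the RC selector matches
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption                # matches the pre-existing pod, so it
                                      # is adopted rather than duplicated
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
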
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:44:36.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-52886ffd-dd5b-43c5-a814-58529c64ac4b
STEP: Creating a pod to test consume secrets
Jan  5 14:44:36.469: INFO: Waiting up to 5m0s for pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2" in namespace "secrets-5300" to be "success or failure"
Jan  5 14:44:36.635: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 166.397564ms
Jan  5 14:44:38.649: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179589888s
Jan  5 14:44:40.659: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189671884s
Jan  5 14:44:42.669: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200034125s
Jan  5 14:44:44.677: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207542585s
Jan  5 14:44:46.693: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224186808s
Jan  5 14:44:48.738: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.268675973s
Jan  5 14:44:50.758: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.288589204s
STEP: Saw pod success
Jan  5 14:44:50.758: INFO: Pod "pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2" satisfied condition "success or failure"
Jan  5 14:44:50.763: INFO: Trying to get logs from node iruya-node pod pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2 container secret-volume-test: 
STEP: delete the pod
Jan  5 14:44:51.055: INFO: Waiting for pod pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2 to disappear
Jan  5 14:44:51.067: INFO: Pod pod-secrets-42c48352-b919-4708-a35f-e4a32bab95d2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:44:51.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5300" for this suite.
Jan  5 14:44:57.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:44:57.228: INFO: namespace secrets-5300 deletion completed in 6.151601753s

• [SLOW TEST:21.020 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
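
This Secrets volume test is the volume-mount counterpart of the env-var test earlier: "with mappings" again means an items list that remaps a Secret key to a custom path under the mount point. A minimal sketch with hypothetical names:

apiVersion: v1
kind: Secret
metadata:
  name: secret-volume-demo            # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-volume-demo
      items:                          # the "mapping": key -> custom path
      - key: data-1
        path: new-path-data-1
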
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:44:57.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan  5 14:44:57.650: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  5 14:45:02.664: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:45:02.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4930" for this suite.
Jan  5 14:45:09.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:45:09.215: INFO: namespace replication-controller-4930 deletion completed in 6.264729471s

• [SLOW TEST:11.987 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
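
The release test is the inverse of the adoption test above: once the 'name' label on one managed pod is changed so the controller's selector no longer matches it, the controller drops its ownerReference (releasing the pod) and creates a replacement to restore the replica count. A hypothetical strategic-merge patch body that would trigger the release when applied to the managed pod (for example via kubectl patch):

# Changing the selected-on label orphans the pod from its controller.
metadata:
  labels:
    name: pod-release-orphaned        # hypothetical new value; was the
                                      # value the RC selector matched
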
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:45:09.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  5 14:45:09.368: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  5 14:45:09.427: INFO: Waiting for terminating namespaces to be deleted...
Jan  5 14:45:09.430: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  5 14:45:09.450: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  5 14:45:09.450: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:45:09.450: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:45:09.450: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.450: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  5 14:45:09.450: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  5 14:45:09.484: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  5 14:45:09.484: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  5 14:45:09.484: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  5 14:45:09.484: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  5 14:45:09.484: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container coredns ready: true, restart count 0
Jan  5 14:45:09.484: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container etcd ready: true, restart count 0
Jan  5 14:45:09.484: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  5 14:45:09.484: INFO: 	Container weave ready: true, restart count 0
Jan  5 14:45:09.484: INFO: 	Container weave-npc ready: true, restart count 0
Jan  5 14:45:09.484: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  5 14:45:09.484: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e704d584b734e4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:45:10.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5345" for this suite.
Jan  5 14:45:16.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:45:16.678: INFO: namespace sched-pred-5345 deletion completed in 6.133769631s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.463 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
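
The NodeSelector predicate test simply submits a pod whose nodeSelector matches no label on any node, then asserts that the FailedScheduling event above is emitted while the pod stays Pending. A sketch of such a pod, with a hypothetical selector key/value:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    no-such-label: nonempty           # hypothetical; matches no node, so the
                                      # scheduler reports "0/2 nodes are available"
  containers:
  - name: restricted-pod
    image: docker.io/library/nginx:1.14-alpine
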
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:45:16.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  5 14:45:16.831: INFO: Waiting up to 5m0s for pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529" in namespace "emptydir-9610" to be "success or failure"
Jan  5 14:45:16.907: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 75.880025ms
Jan  5 14:45:18.920: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088534202s
Jan  5 14:45:20.930: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098259859s
Jan  5 14:45:22.942: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110493079s
Jan  5 14:45:24.957: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125113227s
Jan  5 14:45:26.974: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142252479s
Jan  5 14:45:29.026: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Pending", Reason="", readiness=false. Elapsed: 12.19460278s
Jan  5 14:45:31.037: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.205784882s
STEP: Saw pod success
Jan  5 14:45:31.037: INFO: Pod "pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529" satisfied condition "success or failure"
Jan  5 14:45:31.042: INFO: Trying to get logs from node iruya-node pod pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529 container test-container: 
STEP: delete the pod
Jan  5 14:45:31.128: INFO: Waiting for pod pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529 to disappear
Jan  5 14:45:31.231: INFO: Pod pod-42d4934e-7c33-4070-8f8b-cc2f62fc8529 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:45:31.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9610" for this suite.
Jan  5 14:45:37.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:45:37.404: INFO: namespace emptydir-9610 deletion completed in 6.162466062s

• [SLOW TEST:20.725 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
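
The tmpfs EmptyDir variants, including this one and the (root,0644,tmpfs) test further down, differ from the default-medium sketch shown earlier only in the volume's medium field, which backs the emptyDir with RAM instead of node-local disk. The relevant stanza, in an otherwise identical hypothetical pod:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo           # hypothetical
spec:
  securityContext:
    runAsUser: 1001                   # the "non-root" variant; omit for root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/volume"]   # should report tmpfs
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  restartPolicy: Never
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                  # "tmpfs" in the test name
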
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:45:37.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  5 14:45:37.555: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411948,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 14:45:37.556: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411948,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  5 14:45:47.575: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411962,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  5 14:45:47.576: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411962,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  5 14:45:57.593: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411977,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 14:45:57.594: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411977,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  5 14:46:07.610: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411992,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 14:46:07.611: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-a,UID:f98ef94c-1003-4f22-be4b-06c9dd19558c,ResourceVersion:19411992,Generation:0,CreationTimestamp:2020-01-05 14:45:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  5 14:46:17.637: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-b,UID:e626cfe0-75ee-40bd-a66b-df430c286082,ResourceVersion:19412007,Generation:0,CreationTimestamp:2020-01-05 14:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 14:46:17.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-b,UID:e626cfe0-75ee-40bd-a66b-df430c286082,ResourceVersion:19412007,Generation:0,CreationTimestamp:2020-01-05 14:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  5 14:46:27.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-b,UID:e626cfe0-75ee-40bd-a66b-df430c286082,ResourceVersion:19412021,Generation:0,CreationTimestamp:2020-01-05 14:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  5 14:46:27.702: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7252,SelfLink:/api/v1/namespaces/watch-7252/configmaps/e2e-watch-test-configmap-b,UID:e626cfe0-75ee-40bd-a66b-df430c286082,ResourceVersion:19412021,Generation:0,CreationTimestamp:2020-01-05 14:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:46:37.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7252" for this suite.
Jan  5 14:46:43.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:46:43.948: INFO: namespace watch-7252 deletion completed in 6.201635086s

• [SLOW TEST:66.544 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
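
Each watcher in this test is a label-selector-scoped watch: watcher A sees only configmaps labeled multiple-watchers-A, watcher B only multiple-watchers-B, and the third uses a set-based selector covering both, which is why every ADDED/MODIFIED/DELETED event above arrives exactly twice. The watched objects look roughly like this (hypothetical reconstruction; an equivalent ad-hoc watch is kubectl get configmaps -l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)' --watch):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # fires watchers A and A-or-B
data:
  mutation: "1"                       # incremented on each "modifying" step
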
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:46:43.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  5 14:46:44.156: INFO: Waiting up to 5m0s for pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561" in namespace "emptydir-4744" to be "success or failure"
Jan  5 14:46:44.179: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 23.240554ms
Jan  5 14:46:46.186: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029780769s
Jan  5 14:46:48.191: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035383512s
Jan  5 14:46:50.199: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04304804s
Jan  5 14:46:52.223: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067022026s
Jan  5 14:46:54.231: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 10.074972346s
Jan  5 14:46:56.240: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Pending", Reason="", readiness=false. Elapsed: 12.084279848s
Jan  5 14:46:58.253: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.096891893s
STEP: Saw pod success
Jan  5 14:46:58.253: INFO: Pod "pod-44369e8a-03d9-4048-8aea-c1f48b922561" satisfied condition "success or failure"
Jan  5 14:46:58.258: INFO: Trying to get logs from node iruya-node pod pod-44369e8a-03d9-4048-8aea-c1f48b922561 container test-container: 
STEP: delete the pod
Jan  5 14:46:58.340: INFO: Waiting for pod pod-44369e8a-03d9-4048-8aea-c1f48b922561 to disappear
Jan  5 14:46:58.352: INFO: Pod pod-44369e8a-03d9-4048-8aea-c1f48b922561 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:46:58.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4744" for this suite.
Jan  5 14:47:04.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:47:04.651: INFO: namespace emptydir-4744 deletion completed in 6.292660612s

• [SLOW TEST:20.702 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:47:04.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  5 14:47:04.968: INFO: Waiting up to 5m0s for pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe" in namespace "emptydir-4657" to be "success or failure"
Jan  5 14:47:05.140: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 171.380337ms
Jan  5 14:47:07.153: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184148488s
Jan  5 14:47:09.173: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204018126s
Jan  5 14:47:11.183: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214816594s
Jan  5 14:47:13.197: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.228377037s
Jan  5 14:47:15.210: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24149038s
Jan  5 14:47:17.216: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Pending", Reason="", readiness=false. Elapsed: 12.247861053s
Jan  5 14:47:19.225: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.256879512s
STEP: Saw pod success
Jan  5 14:47:19.226: INFO: Pod "pod-f6f38def-700c-411b-9a6a-7551a03df6fe" satisfied condition "success or failure"
Jan  5 14:47:19.230: INFO: Trying to get logs from node iruya-node pod pod-f6f38def-700c-411b-9a6a-7551a03df6fe container test-container: 
STEP: delete the pod
Jan  5 14:47:19.802: INFO: Waiting for pod pod-f6f38def-700c-411b-9a6a-7551a03df6fe to disappear
Jan  5 14:47:19.814: INFO: Pod pod-f6f38def-700c-411b-9a6a-7551a03df6fe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:47:19.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4657" for this suite.
Jan  5 14:47:25.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:47:26.004: INFO: namespace emptydir-4657 deletion completed in 6.177465244s

• [SLOW TEST:21.352 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:47:26.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:47:26.283: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  5 14:47:26.384: INFO: Number of nodes with available pods: 0
Jan  5 14:47:26.384: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:28.108: INFO: Number of nodes with available pods: 0
Jan  5 14:47:28.108: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:29.061: INFO: Number of nodes with available pods: 0
Jan  5 14:47:29.062: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:29.395: INFO: Number of nodes with available pods: 0
Jan  5 14:47:29.395: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:30.571: INFO: Number of nodes with available pods: 0
Jan  5 14:47:30.571: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:31.405: INFO: Number of nodes with available pods: 0
Jan  5 14:47:31.405: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:32.400: INFO: Number of nodes with available pods: 0
Jan  5 14:47:32.401: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:33.398: INFO: Number of nodes with available pods: 0
Jan  5 14:47:33.398: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:35.791: INFO: Number of nodes with available pods: 0
Jan  5 14:47:35.791: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:36.595: INFO: Number of nodes with available pods: 0
Jan  5 14:47:36.595: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:39.368: INFO: Number of nodes with available pods: 0
Jan  5 14:47:39.368: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:39.662: INFO: Number of nodes with available pods: 0
Jan  5 14:47:39.663: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:40.398: INFO: Number of nodes with available pods: 0
Jan  5 14:47:40.398: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:47:41.400: INFO: Number of nodes with available pods: 2
Jan  5 14:47:41.400: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods' image.
STEP: Check that daemon pods images are updated.
Jan  5 14:47:41.612: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:41.613: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:42.686: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:42.686: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:44.746: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:44.746: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:45.684: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:45.684: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:46.696: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:46.696: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:47.782: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:47.782: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:48.684: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:48.684: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:49.685: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:49.685: INFO: Wrong image for pod: daemon-set-wjf9f. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:49.685: INFO: Pod daemon-set-wjf9f is not available
Jan  5 14:47:51.831: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:51.831: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:52.695: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:52.695: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:54.781: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:54.781: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:55.690: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:55.690: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:56.686: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:56.686: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:59.559: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:59.560: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:47:59.877: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:47:59.877: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:48:01.066: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:01.067: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:48:01.855: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:01.855: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:48:02.696: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:02.696: INFO: Pod daemon-set-ghdh2 is not available
Jan  5 14:48:03.735: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:04.684: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:05.687: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:06.690: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:07.693: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:08.690: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:09.688: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:10.704: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:11.685: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:11.685: INFO: Pod daemon-set-4gckc is not available
Jan  5 14:48:12.683: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:12.683: INFO: Pod daemon-set-4gckc is not available
Jan  5 14:48:13.691: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:13.692: INFO: Pod daemon-set-4gckc is not available
Jan  5 14:48:14.688: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:14.688: INFO: Pod daemon-set-4gckc is not available
Jan  5 14:48:15.685: INFO: Wrong image for pod: daemon-set-4gckc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  5 14:48:15.685: INFO: Pod daemon-set-4gckc is not available
Jan  5 14:48:17.687: INFO: Pod daemon-set-9xtjs is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  5 14:48:17.698: INFO: Number of nodes with available pods: 1
Jan  5 14:48:17.698: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:18.726: INFO: Number of nodes with available pods: 1
Jan  5 14:48:18.726: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:19.713: INFO: Number of nodes with available pods: 1
Jan  5 14:48:19.713: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:20.737: INFO: Number of nodes with available pods: 1
Jan  5 14:48:20.737: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:21.724: INFO: Number of nodes with available pods: 1
Jan  5 14:48:21.725: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:22.730: INFO: Number of nodes with available pods: 1
Jan  5 14:48:22.730: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:23.726: INFO: Number of nodes with available pods: 1
Jan  5 14:48:23.727: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:24.723: INFO: Number of nodes with available pods: 1
Jan  5 14:48:24.723: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:25.712: INFO: Number of nodes with available pods: 1
Jan  5 14:48:25.712: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:26.716: INFO: Number of nodes with available pods: 1
Jan  5 14:48:26.716: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:27.711: INFO: Number of nodes with available pods: 1
Jan  5 14:48:27.711: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:28.711: INFO: Number of nodes with available pods: 1
Jan  5 14:48:28.711: INFO: Node iruya-node is running more than one daemon pod
Jan  5 14:48:29.714: INFO: Number of nodes with available pods: 2
Jan  5 14:48:29.714: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9313, will wait for the garbage collector to delete the pods
Jan  5 14:48:29.813: INFO: Deleting DaemonSet.extensions daemon-set took: 15.054136ms
Jan  5 14:48:30.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.722843ms
Jan  5 14:48:47.924: INFO: Number of nodes with available pods: 0
Jan  5 14:48:47.924: INFO: Number of running nodes: 0, number of available pods: 0
Jan  5 14:48:47.927: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9313/daemonsets","resourceVersion":"19412327"},"items":null}

Jan  5 14:48:47.929: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9313/pods","resourceVersion":"19412327"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:48:47.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9313" for this suite.
Jan  5 14:48:55.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:48:56.104: INFO: namespace daemonsets-9313 deletion completed in 8.161637764s

• [SLOW TEST:90.099 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
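
For orientation, the DaemonSet this rolling-update test exercises would look roughly like the sketch below. The two images and the RollingUpdate strategy are taken from the log above, and the name matches the teardown step; the selector labels and container name are illustrative assumptions, not values from the log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                     # name from the "Deleting DaemonSet" step above
spec:
  selector:
    matchLabels:
      app: daemon-set                  # assumed label; the real selector is not shown
  updateStrategy:
    type: RollingUpdate                # the update strategy under test
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app                      # assumed container name
        image: docker.io/library/nginx:1.14-alpine
        # the test then patches the template image to
        # gcr.io/kubernetes-e2e-test-images/redis:1.0, producing the
        # "Wrong image for pod" polling lines above until every pod is replaced
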
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:48:56.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d7ac2fdb-a5f2-4b53-acdf-6a4ea1b72c2b
STEP: Creating a pod to test consume configMaps
Jan  5 14:48:56.363: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e" in namespace "projected-824" to be "success or failure"
Jan  5 14:48:56.410: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 47.27821ms
Jan  5 14:48:58.421: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057790791s
Jan  5 14:49:00.429: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066050007s
Jan  5 14:49:02.439: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075760325s
Jan  5 14:49:04.450: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087102904s
Jan  5 14:49:06.461: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.097698112s
Jan  5 14:49:08.474: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.11079435s
Jan  5 14:49:10.489: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.126472415s
STEP: Saw pod success
Jan  5 14:49:10.490: INFO: Pod "pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e" satisfied condition "success or failure"
Jan  5 14:49:10.498: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 14:49:12.243: INFO: Waiting for pod pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e to disappear
Jan  5 14:49:12.265: INFO: Pod pod-projected-configmaps-f73b7a10-927a-443d-87a1-0feb8ec8ec4e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:49:12.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-824" for this suite.
Jan  5 14:49:18.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:49:18.723: INFO: namespace projected-824 deletion completed in 6.44121505s

• [SLOW TEST:22.619 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
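
A pod of the shape this test creates would look roughly as follows; the container name comes from the log above, while the image, command, mount path, and the concrete defaultMode value are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the real name carries a generated UUID
spec:
  restartPolicy: Never                     # assumed; the pod runs to "Succeeded"
  containers:
  - name: projected-configmap-volume-test  # container name from the log
    image: docker.io/library/busybox:1.29  # assumed test image
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume   # assumed path
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400                    # the mode under test; 0400 is an assumed value
      sources:
      - configMap:
          name: projected-configmap-test-volume   # shortened; the real name carries a UUID
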
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:49:18.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  5 14:49:18.869: INFO: PodSpec: initContainers in spec.initContainers
Jan  5 14:50:36.381: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-47cc2bcc-5b60-4584-b638-27193062cc1d", GenerateName:"", Namespace:"init-container-4308", SelfLink:"/api/v1/namespaces/init-container-4308/pods/pod-init-47cc2bcc-5b60-4584-b638-27193062cc1d", UID:"4762d545-301a-4ecd-a048-8a7fdcb7efe4", ResourceVersion:"19412557", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713832559, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"869098110"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2tzjk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc003006240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tzjk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tzjk", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2tzjk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d809a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00112c060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d80bb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d80c50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d80c58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d80c5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713832559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713832559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713832559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713832559, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc0020a2300), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022d2310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0022d2380)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://1516f05559754f71e8a9978378cfac6e1ae996ab2eeb13197e2fe322a10d5e3c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020a2340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0020a2320), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:50:36.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4308" for this suite.
Jan  5 14:51:00.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:51:00.676: INFO: namespace init-container-4308 deletion completed in 24.225803888s

• [SLOW TEST:101.950 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
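
The PodSpec dumped above reduces to the following manifest; the names, images, commands, restart policy, and the Guaranteed-class resources on the app container are all taken from that dump:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example                   # the real name carries a generated UUID
  labels:
    name: foo                              # label from the dumped ObjectMeta
spec:
  restartPolicy: Always                    # the RestartAlways behaviour under test
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]                # always fails, so initialization never completes
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]                 # never reached while init1 keeps failing
  containers:
  - name: run1                             # must never start; that is the assertion
    image: k8s.gcr.io/pause:3.1
    resources:                             # requests == limits, hence QOSClass "Guaranteed"
      requests:
        cpu: 100m
        memory: "52428800"
      limits:
        cpu: 100m
        memory: "52428800"
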
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:51:00.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  5 14:51:13.970: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:51:15.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5563" for this suite.
Jan  5 14:51:39.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:51:39.198: INFO: namespace replicaset-5563 deletion completed in 24.175410398s

• [SLOW TEST:38.522 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
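
The adoption/release flow above can be reproduced with a ReplicaSet shaped like this sketch. The name mirrors the pod created in the test; the label key/value and image are assumptions, since the log only says the pre-existing pod carries a 'name' label matched by the selector. Adoption happens because the orphan pod already matches the selector; re-labelling the pod so it no longer matches makes the controller release it, as the final STEP asserts.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release           # assumed label matching the orphan pod
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine   # assumed image
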
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:51:39.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:51:46.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4014" for this suite.
Jan  5 14:51:52.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:51:52.685: INFO: namespace namespaces-4014 deletion completed in 6.523546941s
STEP: Destroying namespace "nsdeletetest-2860" for this suite.
Jan  5 14:51:52.694: INFO: Namespace nsdeletetest-2860 was already deleted
STEP: Destroying namespace "nsdeletetest-8459" for this suite.
Jan  5 14:51:58.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:51:58.906: INFO: namespace nsdeletetest-8459 deletion completed in 6.211145379s

• [SLOW TEST:19.707 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
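
The service created and checked by this test is nothing more exotic than the sketch below; the name, port, and selector are all illustrative, since the log never prints the manifest. The point of the test is that deleting the enclosing namespace must also remove this object, so recreating the namespace finds no services in it.

apiVersion: v1
kind: Service
metadata:
  name: test-service                       # illustrative name
  namespace: nsdeletetest-2860             # one of the test namespaces destroyed above
spec:
  ports:
  - port: 80                               # illustrative port
    targetPort: 80
  selector:
    app: test                              # illustrative selector
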
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:51:58.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  5 14:52:14.184: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4157 pod-service-account-6620c13e-3962-449f-b0ff-b969701b12b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  5 14:52:17.481: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4157 pod-service-account-6620c13e-3962-449f-b0ff-b969701b12b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  5 14:52:18.067: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4157 pod-service-account-6620c13e-3962-449f-b0ff-b969701b12b1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:52:18.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4157" for this suite.
Jan  5 14:52:24.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:52:24.767: INFO: namespace svcaccounts-4157 deletion completed in 6.219147791s

• [SLOW TEST:25.861 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
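
The three kubectl exec commands above read the standard projected token files. A pod relying on that default projection looks roughly like this; the container name comes from the -c=test flag in the log, while the image and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example        # the real name carries a generated UUID
spec:
  serviceAccountName: default
  automountServiceAccountToken: true       # default behaviour the test relies on
  containers:
  - name: test                             # container name from the exec commands above
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sleep", "3600"]
    # the kubelet mounts the token secret at the well-known path read above:
    #   /var/run/secrets/kubernetes.io/serviceaccount/{token,ca.crt,namespace}
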
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:52:24.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  5 14:52:44.366: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:52:44.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5183" for this suite.
Jan  5 14:52:50.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:52:50.644: INFO: namespace container-runtime-5183 deletion completed in 6.131930492s

• [SLOW TEST:25.877 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
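
A pod matching this test's description would be shaped roughly as below. The DONE message matches the "Expected: &{DONE}" line above; the UID, image, and concrete path are assumptions standing in for "non-root user" and "non-default path":

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                        # any non-root UID; exact value assumed
  containers:
  - name: termination-message-container
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path (assumed value)
    terminationMessagePolicy: File
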
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:52:50.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  5 14:53:19.829: INFO: Container started at 2020-01-05 14:53:01 +0000 UTC, pod became ready at 2020-01-05 14:53:18 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:53:19.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3182" for this suite.
Jan  5 14:53:41.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:53:41.967: INFO: namespace container-probe-3182 deletion completed in 22.130720807s

• [SLOW TEST:51.321 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
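
The log shows the container starting at 14:53:01 and becoming ready only at 14:53:18, consistent with a readiness probe gated on an initial delay. A sketch of such a pod follows, with every concrete value assumed:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-example            # illustrative name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    ports:
    - containerPort: 80
    readinessProbe:                        # readiness probes only gate traffic; they
      httpGet:                             # never restart the container, which is the
        path: /                            # second half of the assertion
        port: 80
      initialDelaySeconds: 15              # assumed; the observed gap above is ~17s
      periodSeconds: 5
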
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:53:41.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  5 14:53:42.135: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  5 14:53:42.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:42.977: INFO: stderr: ""
Jan  5 14:53:42.977: INFO: stdout: "service/redis-slave created\n"
Jan  5 14:53:42.977: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  5 14:53:42.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:43.684: INFO: stderr: ""
Jan  5 14:53:43.685: INFO: stdout: "service/redis-master created\n"
Jan  5 14:53:43.685: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  5 14:53:43.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:44.352: INFO: stderr: ""
Jan  5 14:53:44.353: INFO: stdout: "service/frontend created\n"
Jan  5 14:53:44.354: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan  5 14:53:44.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:44.847: INFO: stderr: ""
Jan  5 14:53:44.847: INFO: stdout: "deployment.apps/frontend created\n"
Jan  5 14:53:44.848: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  5 14:53:44.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:45.579: INFO: stderr: ""
Jan  5 14:53:45.579: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  5 14:53:45.579: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan  5 14:53:45.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4730'
Jan  5 14:53:47.349: INFO: stderr: ""
Jan  5 14:53:47.349: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan  5 14:53:47.349: INFO: Waiting for all frontend pods to be Running.
Jan  5 14:54:17.404: INFO: Waiting for frontend to serve content.
Jan  5 14:54:21.261: INFO: Trying to add a new entry to the guestbook.
Jan  5 14:54:21.435: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  5 14:54:21.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:21.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:21.871: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  5 14:54:21.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:22.439: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:22.439: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  5 14:54:22.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:22.883: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:22.883: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  5 14:54:22.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:23.095: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:23.095: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  5 14:54:23.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:23.377: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:23.378: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  5 14:54:23.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4730'
Jan  5 14:54:23.828: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  5 14:54:23.829: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:54:23.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4730" for this suite.
Jan  5 14:55:08.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:55:08.183: INFO: namespace kubectl-4730 deletion completed in 44.346072872s

• [SLOW TEST:86.216 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:55:08.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f882f6bd-a022-4159-baef-3e90abd8b07e
STEP: Creating configMap with name cm-test-opt-upd-156abdb5-f085-4b37-94c2-51d1d5b7f489
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f882f6bd-a022-4159-baef-3e90abd8b07e
STEP: Updating configmap cm-test-opt-upd-156abdb5-f085-4b37-94c2-51d1d5b7f489
STEP: Creating configMap with name cm-test-opt-create-482902f4-6b60-41e1-b715-55342cd20316
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:56:46.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4153" for this suite.
Jan  5 14:57:08.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:57:08.916: INFO: namespace configmap-4153 deletion completed in 22.13685822s

• [SLOW TEST:120.733 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
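
The three configMaps named in the STEPs above are wired into one pod as optional volume sources; because they are optional, the pod tolerates the mid-test deletion and the late creation, and the kubelet reflects each change into the mounted volumes. A sketch using the exact configMap names from the log (image, command, container name, and mount paths assumed):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example             # illustrative name
spec:
  containers:
  - name: cm-volume-test                   # assumed container name
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - { name: del-cm, mountPath: /etc/cm-volume-del }
    - { name: upd-cm, mountPath: /etc/cm-volume-upd }
    - { name: create-cm, mountPath: /etc/cm-volume-create }
  volumes:
  - name: del-cm
    configMap:
      name: cm-test-opt-del-f882f6bd-a022-4159-baef-3e90abd8b07e     # deleted mid-test
      optional: true
  - name: upd-cm
    configMap:
      name: cm-test-opt-upd-156abdb5-f085-4b37-94c2-51d1d5b7f489     # updated mid-test
      optional: true
  - name: create-cm
    configMap:
      name: cm-test-opt-create-482902f4-6b60-41e1-b715-55342cd20316  # created mid-test
      optional: true
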
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:57:08.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  5 14:57:09.066: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1539,SelfLink:/api/v1/namespaces/watch-1539/configmaps/e2e-watch-test-resource-version,UID:361f5516-8561-46f8-bb0b-6b46199f6886,ResourceVersion:19413482,Generation:0,CreationTimestamp:2020-01-05 14:57:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  5 14:57:09.067: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1539,SelfLink:/api/v1/namespaces/watch-1539/configmaps/e2e-watch-test-resource-version,UID:361f5516-8561-46f8-bb0b-6b46199f6886,ResourceVersion:19413483,Generation:0,CreationTimestamp:2020-01-05 14:57:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:57:09.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1539" for this suite.
Jan  5 14:57:15.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:57:15.225: INFO: namespace watch-1539 deletion completed in 6.151926484s

• [SLOW TEST:6.309 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
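
The two events dumped above describe this object after its second mutation; opening the watch at the resourceVersion returned by the first update is exactly why only the MODIFIED (mutation: 2) and DELETED notifications arrive. Reconstructed from the dump:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-1539
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"                            # the first update (mutation 1) predates the watch
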
SSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:57:15.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1726, will wait for the garbage collector to delete the pods
Jan  5 14:57:25.385: INFO: Deleting Job.batch foo took: 13.104919ms
Jan  5 14:57:25.685: INFO: Terminating Job.batch foo pods took: 300.421585ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:58:06.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1726" for this suite.
Jan  5 14:58:12.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:58:12.816: INFO: namespace job-1726 deletion completed in 6.212821222s

• [SLOW TEST:57.591 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
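
A Job matching this flow would be shaped roughly as follows; the name comes from the deletion log above, while parallelism, image, and command are assumptions chosen so the pods stay up long enough for "active pods == parallelism" to be observed before the delete:

apiVersion: batch/v1
kind: Job
metadata:
  name: foo                                # name from "Deleting Job.batch foo" above
spec:
  parallelism: 2                           # assumed value
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c                            # assumed container name
        image: docker.io/library/busybox:1.29   # assumed image
        command: ["sleep", "1000000"]      # long-running, so deletion (and garbage
                                           # collection of the pods) can be observed
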
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:58:12.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  5 14:58:12.886: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:58:28.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5394" for this suite.
Jan  5 14:58:50.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:58:51.131: INFO: namespace init-container-5394 deletion completed in 22.206280843s

• [SLOW TEST:38.315 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:58:51.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  5 14:58:51.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8112'
Jan  5 14:58:51.581: INFO: stderr: ""
Jan  5 14:58:51.581: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 14:58:51.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8112'
Jan  5 14:58:51.798: INFO: stderr: ""
Jan  5 14:58:51.798: INFO: stdout: "update-demo-nautilus-g6rv5 update-demo-nautilus-r6v5v "
Jan  5 14:58:51.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6rv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:58:51.981: INFO: stderr: ""
Jan  5 14:58:51.982: INFO: stdout: ""
Jan  5 14:58:51.982: INFO: update-demo-nautilus-g6rv5 is created but not running
Jan  5 14:58:56.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8112'
Jan  5 14:58:57.985: INFO: stderr: ""
Jan  5 14:58:57.985: INFO: stdout: "update-demo-nautilus-g6rv5 update-demo-nautilus-r6v5v "
Jan  5 14:58:57.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6rv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:58:58.683: INFO: stderr: ""
Jan  5 14:58:58.683: INFO: stdout: ""
Jan  5 14:58:58.683: INFO: update-demo-nautilus-g6rv5 is created but not running
Jan  5 14:59:03.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8112'
Jan  5 14:59:03.916: INFO: stderr: ""
Jan  5 14:59:03.916: INFO: stdout: "update-demo-nautilus-g6rv5 update-demo-nautilus-r6v5v "
Jan  5 14:59:03.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6rv5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:04.051: INFO: stderr: ""
Jan  5 14:59:04.051: INFO: stdout: "true"
Jan  5 14:59:04.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g6rv5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:04.183: INFO: stderr: ""
Jan  5 14:59:04.183: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 14:59:04.183: INFO: validating pod update-demo-nautilus-g6rv5
Jan  5 14:59:04.198: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 14:59:04.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 14:59:04.198: INFO: update-demo-nautilus-g6rv5 is verified up and running
Jan  5 14:59:04.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6v5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:04.281: INFO: stderr: ""
Jan  5 14:59:04.281: INFO: stdout: "true"
Jan  5 14:59:04.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r6v5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:04.538: INFO: stderr: ""
Jan  5 14:59:04.538: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  5 14:59:04.538: INFO: validating pod update-demo-nautilus-r6v5v
Jan  5 14:59:04.557: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  5 14:59:04.557: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  5 14:59:04.557: INFO: update-demo-nautilus-r6v5v is verified up and running
STEP: rolling-update to new replication controller
Jan  5 14:59:04.561: INFO: scanned /root for discovery docs: 
Jan  5 14:59:04.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8112'
Jan  5 14:59:34.245: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  5 14:59:34.245: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  5 14:59:34.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8112'
Jan  5 14:59:34.373: INFO: stderr: ""
Jan  5 14:59:34.373: INFO: stdout: "update-demo-kitten-hbsvk update-demo-kitten-hwnz5 "
Jan  5 14:59:34.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hbsvk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:34.559: INFO: stderr: ""
Jan  5 14:59:34.559: INFO: stdout: "true"
Jan  5 14:59:34.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hbsvk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:34.698: INFO: stderr: ""
Jan  5 14:59:34.698: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  5 14:59:34.698: INFO: validating pod update-demo-kitten-hbsvk
Jan  5 14:59:34.720: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  5 14:59:34.721: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan  5 14:59:34.721: INFO: update-demo-kitten-hbsvk is verified up and running
Jan  5 14:59:34.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hwnz5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:34.832: INFO: stderr: ""
Jan  5 14:59:34.832: INFO: stdout: "true"
Jan  5 14:59:34.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hwnz5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8112'
Jan  5 14:59:34.956: INFO: stderr: ""
Jan  5 14:59:34.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  5 14:59:34.957: INFO: validating pod update-demo-kitten-hwnz5
Jan  5 14:59:34.988: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  5 14:59:34.989: INFO: Unmarshalled JSON image field => {kitten.jpg}, expecting kitten.jpg.
Jan  5 14:59:34.989: INFO: update-demo-kitten-hwnz5 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 14:59:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8112" for this suite.
Jan  5 14:59:57.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 14:59:57.144: INFO: namespace kubectl-8112 deletion completed in 22.149504019s

• [SLOW TEST:66.011 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 14:59:57.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-57
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-57
STEP: Waiting until all replicas of stateful set ss are running in namespace statefulset-57
Jan  5 14:59:57.283: INFO: Found 0 stateful pods, waiting for 1
Jan  5 15:00:07.300: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  5 15:00:07.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 15:00:07.980: INFO: stderr: "I0105 15:00:07.600069    3425 log.go:172] (0xc000116fd0) (0xc000608b40) Create stream\nI0105 15:00:07.600376    3425 log.go:172] (0xc000116fd0) (0xc000608b40) Stream added, broadcasting: 1\nI0105 15:00:07.607279    3425 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0105 15:00:07.607331    3425 log.go:172] (0xc000116fd0) (0xc0008f8000) Create stream\nI0105 15:00:07.607347    3425 log.go:172] (0xc000116fd0) (0xc0008f8000) Stream added, broadcasting: 3\nI0105 15:00:07.609015    3425 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0105 15:00:07.609067    3425 log.go:172] (0xc000116fd0) (0xc000a46000) Create stream\nI0105 15:00:07.609097    3425 log.go:172] (0xc000116fd0) (0xc000a46000) Stream added, broadcasting: 5\nI0105 15:00:07.611033    3425 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0105 15:00:07.781841    3425 log.go:172] (0xc000116fd0) Data frame received for 5\nI0105 15:00:07.782001    3425 log.go:172] (0xc000a46000) (5) Data frame handling\nI0105 15:00:07.782050    3425 log.go:172] (0xc000a46000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 15:00:07.826052    3425 log.go:172] (0xc000116fd0) Data frame received for 3\nI0105 15:00:07.826126    3425 log.go:172] (0xc0008f8000) (3) Data frame handling\nI0105 15:00:07.826160    3425 log.go:172] (0xc0008f8000) (3) Data frame sent\nI0105 15:00:07.968221    3425 log.go:172] (0xc000116fd0) Data frame received for 1\nI0105 15:00:07.968679    3425 log.go:172] (0xc000116fd0) (0xc000a46000) Stream removed, broadcasting: 5\nI0105 15:00:07.968815    3425 log.go:172] (0xc000608b40) (1) Data frame handling\nI0105 15:00:07.968866    3425 log.go:172] (0xc000608b40) (1) Data frame sent\nI0105 15:00:07.968891    3425 log.go:172] (0xc000116fd0) (0xc0008f8000) Stream removed, broadcasting: 3\nI0105 15:00:07.968937    3425 log.go:172] (0xc000116fd0) (0xc000608b40) Stream removed, broadcasting: 1\nI0105 15:00:07.968978    3425 log.go:172] (0xc000116fd0) Go away received\nI0105 15:00:07.970576    3425 log.go:172] (0xc000116fd0) (0xc000608b40) Stream removed, broadcasting: 1\nI0105 15:00:07.970606    3425 log.go:172] (0xc000116fd0) (0xc0008f8000) Stream removed, broadcasting: 3\nI0105 15:00:07.970625    3425 log.go:172] (0xc000116fd0) (0xc000a46000) Stream removed, broadcasting: 5\n"
Jan  5 15:00:07.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 15:00:07.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 15:00:07.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  5 15:00:18.000: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 15:00:18.000: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 15:00:18.046: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997726s
Jan  5 15:00:19.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990994059s
Jan  5 15:00:20.073: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.974731887s
Jan  5 15:00:21.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963861002s
Jan  5 15:00:22.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954869295s
Jan  5 15:00:23.103: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.943780574s
Jan  5 15:00:24.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.933728458s
Jan  5 15:00:25.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.924583212s
Jan  5 15:00:26.136: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.907750464s
Jan  5 15:00:27.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 901.077729ms
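[editor's note] The mv of index.html out of the nginx web root makes ss-0's readiness probe fail, and the countdown above then re-checks for ten seconds that the StatefulSet controller never creates a second pod while an existing one is unready. A sketch of that verification loop (the namespace and the baz=blah,foo=bar selector come from this run; podNames is an illustrative helper, and kubectl is assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podNames lists the pods matching the stateful set's label selector.
func podNames(ns, selector string) ([]string, error) {
	out, err := exec.Command("kubectl", "get", "pods",
		"-l", selector, "--namespace", ns,
		"-o", "template", "--template",
		`{{range .items}}{{.metadata.name}} {{end}}`).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// Re-check for ten seconds, as the countdown above does, that the
	// controller holds at one pod while ss-0 is unready.
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		pods, err := podNames("statefulset-57", "baz=blah,foo=bar")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if len(pods) > 1 {
			fmt.Println("statefulset scaled past 1:", pods)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("statefulset ss held at 1 replica")
}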
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-57
Jan  5 15:00:28.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 15:00:28.745: INFO: stderr: "I0105 15:00:28.404921    3446 log.go:172] (0xc000950420) (0xc0005be6e0) Create stream\nI0105 15:00:28.405118    3446 log.go:172] (0xc000950420) (0xc0005be6e0) Stream added, broadcasting: 1\nI0105 15:00:28.412425    3446 log.go:172] (0xc000950420) Reply frame received for 1\nI0105 15:00:28.412537    3446 log.go:172] (0xc000950420) (0xc0005ba320) Create stream\nI0105 15:00:28.412555    3446 log.go:172] (0xc000950420) (0xc0005ba320) Stream added, broadcasting: 3\nI0105 15:00:28.415379    3446 log.go:172] (0xc000950420) Reply frame received for 3\nI0105 15:00:28.415448    3446 log.go:172] (0xc000950420) (0xc0005be780) Create stream\nI0105 15:00:28.415466    3446 log.go:172] (0xc000950420) (0xc0005be780) Stream added, broadcasting: 5\nI0105 15:00:28.423330    3446 log.go:172] (0xc000950420) Reply frame received for 5\nI0105 15:00:28.587997    3446 log.go:172] (0xc000950420) Data frame received for 5\nI0105 15:00:28.588114    3446 log.go:172] (0xc0005be780) (5) Data frame handling\nI0105 15:00:28.588155    3446 log.go:172] (0xc0005be780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 15:00:28.595708    3446 log.go:172] (0xc000950420) Data frame received for 3\nI0105 15:00:28.595834    3446 log.go:172] (0xc0005ba320) (3) Data frame handling\nI0105 15:00:28.596896    3446 log.go:172] (0xc0005ba320) (3) Data frame sent\nI0105 15:00:28.729017    3446 log.go:172] (0xc000950420) (0xc0005ba320) Stream removed, broadcasting: 3\nI0105 15:00:28.729356    3446 log.go:172] (0xc000950420) Data frame received for 1\nI0105 15:00:28.729414    3446 log.go:172] (0xc000950420) (0xc0005be780) Stream removed, broadcasting: 5\nI0105 15:00:28.729566    3446 log.go:172] (0xc0005be6e0) (1) Data frame handling\nI0105 15:00:28.729640    3446 log.go:172] (0xc0005be6e0) (1) Data frame sent\nI0105 15:00:28.729654    3446 log.go:172] (0xc000950420) (0xc0005be6e0) Stream removed, broadcasting: 1\nI0105 15:00:28.729692    3446 log.go:172] (0xc000950420) Go away received\nI0105 15:00:28.731353    3446 log.go:172] (0xc000950420) (0xc0005be6e0) Stream removed, broadcasting: 1\nI0105 15:00:28.731367    3446 log.go:172] (0xc000950420) (0xc0005ba320) Stream removed, broadcasting: 3\nI0105 15:00:28.731379    3446 log.go:172] (0xc000950420) (0xc0005be780) Stream removed, broadcasting: 5\n"
Jan  5 15:00:28.746: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 15:00:28.746: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 15:00:28.752: INFO: Found 1 stateful pods, waiting for 3
Jan  5 15:00:38.764: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 15:00:38.765: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 15:00:38.765: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  5 15:00:48.765: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 15:00:48.765: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  5 15:00:48.765: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Confirming that stateful set scale down will halt with unhealthy stateful pod
Jan  5 15:00:48.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 15:00:49.408: INFO: stderr: "I0105 15:00:49.116399    3466 log.go:172] (0xc00093e370) (0xc0009c26e0) Create stream\nI0105 15:00:49.116713    3466 log.go:172] (0xc00093e370) (0xc0009c26e0) Stream added, broadcasting: 1\nI0105 15:00:49.125468    3466 log.go:172] (0xc00093e370) Reply frame received for 1\nI0105 15:00:49.125653    3466 log.go:172] (0xc00093e370) (0xc000586280) Create stream\nI0105 15:00:49.125689    3466 log.go:172] (0xc00093e370) (0xc000586280) Stream added, broadcasting: 3\nI0105 15:00:49.130012    3466 log.go:172] (0xc00093e370) Reply frame received for 3\nI0105 15:00:49.130178    3466 log.go:172] (0xc00093e370) (0xc0009c2780) Create stream\nI0105 15:00:49.130202    3466 log.go:172] (0xc00093e370) (0xc0009c2780) Stream added, broadcasting: 5\nI0105 15:00:49.133951    3466 log.go:172] (0xc00093e370) Reply frame received for 5\nI0105 15:00:49.249525    3466 log.go:172] (0xc00093e370) Data frame received for 5\nI0105 15:00:49.249629    3466 log.go:172] (0xc0009c2780) (5) Data frame handling\nI0105 15:00:49.249661    3466 log.go:172] (0xc0009c2780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 15:00:49.251273    3466 log.go:172] (0xc00093e370) Data frame received for 3\nI0105 15:00:49.251294    3466 log.go:172] (0xc000586280) (3) Data frame handling\nI0105 15:00:49.251307    3466 log.go:172] (0xc000586280) (3) Data frame sent\nI0105 15:00:49.395199    3466 log.go:172] (0xc00093e370) Data frame received for 1\nI0105 15:00:49.395361    3466 log.go:172] (0xc0009c26e0) (1) Data frame handling\nI0105 15:00:49.395402    3466 log.go:172] (0xc0009c26e0) (1) Data frame sent\nI0105 15:00:49.395455    3466 log.go:172] (0xc00093e370) (0xc0009c26e0) Stream removed, broadcasting: 1\nI0105 15:00:49.395987    3466 log.go:172] (0xc00093e370) (0xc000586280) Stream removed, broadcasting: 3\nI0105 15:00:49.396415    3466 log.go:172] (0xc00093e370) (0xc0009c2780) Stream removed, broadcasting: 5\nI0105 15:00:49.397365    3466 log.go:172] (0xc00093e370) (0xc0009c26e0) Stream removed, broadcasting: 1\nI0105 15:00:49.397670    3466 log.go:172] (0xc00093e370) (0xc000586280) Stream removed, broadcasting: 3\nI0105 15:00:49.397756    3466 log.go:172] (0xc00093e370) (0xc0009c2780) Stream removed, broadcasting: 5\n"
Jan  5 15:00:49.408: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 15:00:49.408: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 15:00:49.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 15:00:49.944: INFO: stderr: "I0105 15:00:49.636628    3486 log.go:172] (0xc000522420) (0xc0003f6640) Create stream\nI0105 15:00:49.636954    3486 log.go:172] (0xc000522420) (0xc0003f6640) Stream added, broadcasting: 1\nI0105 15:00:49.650370    3486 log.go:172] (0xc000522420) Reply frame received for 1\nI0105 15:00:49.650454    3486 log.go:172] (0xc000522420) (0xc00027e000) Create stream\nI0105 15:00:49.650468    3486 log.go:172] (0xc000522420) (0xc00027e000) Stream added, broadcasting: 3\nI0105 15:00:49.651830    3486 log.go:172] (0xc000522420) Reply frame received for 3\nI0105 15:00:49.651857    3486 log.go:172] (0xc000522420) (0xc0003f66e0) Create stream\nI0105 15:00:49.651864    3486 log.go:172] (0xc000522420) (0xc0003f66e0) Stream added, broadcasting: 5\nI0105 15:00:49.652952    3486 log.go:172] (0xc000522420) Reply frame received for 5\nI0105 15:00:49.780690    3486 log.go:172] (0xc000522420) Data frame received for 5\nI0105 15:00:49.780757    3486 log.go:172] (0xc0003f66e0) (5) Data frame handling\nI0105 15:00:49.780785    3486 log.go:172] (0xc0003f66e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 15:00:49.818755    3486 log.go:172] (0xc000522420) Data frame received for 3\nI0105 15:00:49.818847    3486 log.go:172] (0xc00027e000) (3) Data frame handling\nI0105 15:00:49.818879    3486 log.go:172] (0xc00027e000) (3) Data frame sent\nI0105 15:00:49.929784    3486 log.go:172] (0xc000522420) (0xc00027e000) Stream removed, broadcasting: 3\nI0105 15:00:49.929924    3486 log.go:172] (0xc000522420) Data frame received for 1\nI0105 15:00:49.929942    3486 log.go:172] (0xc0003f6640) (1) Data frame handling\nI0105 15:00:49.929955    3486 log.go:172] (0xc0003f6640) (1) Data frame sent\nI0105 15:00:49.930004    3486 log.go:172] (0xc000522420) (0xc0003f6640) Stream removed, broadcasting: 1\nI0105 15:00:49.930746    3486 log.go:172] (0xc000522420) (0xc0003f66e0) Stream removed, broadcasting: 5\nI0105 15:00:49.930822    3486 log.go:172] (0xc000522420) Go away received\nI0105 15:00:49.931148    3486 log.go:172] (0xc000522420) (0xc0003f6640) Stream removed, broadcasting: 1\nI0105 15:00:49.931163    3486 log.go:172] (0xc000522420) (0xc00027e000) Stream removed, broadcasting: 3\nI0105 15:00:49.931171    3486 log.go:172] (0xc000522420) (0xc0003f66e0) Stream removed, broadcasting: 5\n"
Jan  5 15:00:49.944: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 15:00:49.944: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 15:00:49.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  5 15:00:50.487: INFO: stderr: "I0105 15:00:50.143790    3503 log.go:172] (0xc0006dab00) (0xc00064e8c0) Create stream\nI0105 15:00:50.144086    3503 log.go:172] (0xc0006dab00) (0xc00064e8c0) Stream added, broadcasting: 1\nI0105 15:00:50.152788    3503 log.go:172] (0xc0006dab00) Reply frame received for 1\nI0105 15:00:50.152849    3503 log.go:172] (0xc0006dab00) (0xc000768000) Create stream\nI0105 15:00:50.152866    3503 log.go:172] (0xc0006dab00) (0xc000768000) Stream added, broadcasting: 3\nI0105 15:00:50.154380    3503 log.go:172] (0xc0006dab00) Reply frame received for 3\nI0105 15:00:50.154457    3503 log.go:172] (0xc0006dab00) (0xc00067e000) Create stream\nI0105 15:00:50.154481    3503 log.go:172] (0xc0006dab00) (0xc00067e000) Stream added, broadcasting: 5\nI0105 15:00:50.156688    3503 log.go:172] (0xc0006dab00) Reply frame received for 5\nI0105 15:00:50.277837    3503 log.go:172] (0xc0006dab00) Data frame received for 5\nI0105 15:00:50.277899    3503 log.go:172] (0xc00067e000) (5) Data frame handling\nI0105 15:00:50.277927    3503 log.go:172] (0xc00067e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0105 15:00:50.328752    3503 log.go:172] (0xc0006dab00) Data frame received for 3\nI0105 15:00:50.328978    3503 log.go:172] (0xc000768000) (3) Data frame handling\nI0105 15:00:50.329014    3503 log.go:172] (0xc000768000) (3) Data frame sent\nI0105 15:00:50.469194    3503 log.go:172] (0xc0006dab00) Data frame received for 1\nI0105 15:00:50.469321    3503 log.go:172] (0xc0006dab00) (0xc000768000) Stream removed, broadcasting: 3\nI0105 15:00:50.469422    3503 log.go:172] (0xc00064e8c0) (1) Data frame handling\nI0105 15:00:50.469463    3503 log.go:172] (0xc0006dab00) (0xc00067e000) Stream removed, broadcasting: 5\nI0105 15:00:50.469487    3503 log.go:172] (0xc00064e8c0) (1) Data frame sent\nI0105 15:00:50.469497    3503 log.go:172] (0xc0006dab00) (0xc00064e8c0) Stream removed, broadcasting: 1\nI0105 15:00:50.469520    3503 log.go:172] (0xc0006dab00) Go away received\nI0105 15:00:50.470982    3503 log.go:172] (0xc0006dab00) (0xc00064e8c0) Stream removed, broadcasting: 1\nI0105 15:00:50.471008    3503 log.go:172] (0xc0006dab00) (0xc000768000) Stream removed, broadcasting: 3\nI0105 15:00:50.471038    3503 log.go:172] (0xc0006dab00) (0xc00067e000) Stream removed, broadcasting: 5\n"
Jan  5 15:00:50.487: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  5 15:00:50.487: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  5 15:00:50.487: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 15:00:50.515: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  5 15:01:00.533: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 15:01:00.533: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 15:01:00.533: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  5 15:01:00.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999502s
Jan  5 15:01:01.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989075972s
Jan  5 15:01:02.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976080304s
Jan  5 15:01:03.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.952456211s
Jan  5 15:01:04.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.935258259s
Jan  5 15:01:05.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.92866477s
Jan  5 15:01:06.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.915456184s
Jan  5 15:01:07.656: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.90346405s
Jan  5 15:01:08.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.892871345s
Jan  5 15:01:09.681: INFO: Verifying statefulset ss doesn't scale past 3 for another 879.53193ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-57
Jan  5 15:01:10.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 15:01:11.248: INFO: stderr: "I0105 15:01:10.963829    3521 log.go:172] (0xc0009f4420) (0xc0005f0820) Create stream\nI0105 15:01:10.964009    3521 log.go:172] (0xc0009f4420) (0xc0005f0820) Stream added, broadcasting: 1\nI0105 15:01:10.973288    3521 log.go:172] (0xc0009f4420) Reply frame received for 1\nI0105 15:01:10.973436    3521 log.go:172] (0xc0009f4420) (0xc0005f08c0) Create stream\nI0105 15:01:10.973450    3521 log.go:172] (0xc0009f4420) (0xc0005f08c0) Stream added, broadcasting: 3\nI0105 15:01:10.976146    3521 log.go:172] (0xc0009f4420) Reply frame received for 3\nI0105 15:01:10.976206    3521 log.go:172] (0xc0009f4420) (0xc0008ec0a0) Create stream\nI0105 15:01:10.976225    3521 log.go:172] (0xc0009f4420) (0xc0008ec0a0) Stream added, broadcasting: 5\nI0105 15:01:10.978745    3521 log.go:172] (0xc0009f4420) Reply frame received for 5\nI0105 15:01:11.086159    3521 log.go:172] (0xc0009f4420) Data frame received for 5\nI0105 15:01:11.086265    3521 log.go:172] (0xc0008ec0a0) (5) Data frame handling\nI0105 15:01:11.086297    3521 log.go:172] (0xc0008ec0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 15:01:11.086867    3521 log.go:172] (0xc0009f4420) Data frame received for 3\nI0105 15:01:11.086881    3521 log.go:172] (0xc0005f08c0) (3) Data frame handling\nI0105 15:01:11.086899    3521 log.go:172] (0xc0005f08c0) (3) Data frame sent\nI0105 15:01:11.223741    3521 log.go:172] (0xc0009f4420) (0xc0005f08c0) Stream removed, broadcasting: 3\nI0105 15:01:11.223896    3521 log.go:172] (0xc0009f4420) Data frame received for 1\nI0105 15:01:11.223929    3521 log.go:172] (0xc0005f0820) (1) Data frame handling\nI0105 15:01:11.223953    3521 log.go:172] (0xc0005f0820) (1) Data frame sent\nI0105 15:01:11.223978    3521 log.go:172] (0xc0009f4420) (0xc0008ec0a0) Stream removed, broadcasting: 5\nI0105 15:01:11.224079    3521 log.go:172] (0xc0009f4420) (0xc0005f0820) Stream removed, broadcasting: 1\nI0105 15:01:11.224104    3521 log.go:172] (0xc0009f4420) Go away received\nI0105 15:01:11.229995    3521 log.go:172] (0xc0009f4420) (0xc0005f0820) Stream removed, broadcasting: 1\nI0105 15:01:11.230224    3521 log.go:172] (0xc0009f4420) (0xc0005f08c0) Stream removed, broadcasting: 3\nI0105 15:01:11.230289    3521 log.go:172] (0xc0009f4420) (0xc0008ec0a0) Stream removed, broadcasting: 5\n"
Jan  5 15:01:11.248: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 15:01:11.248: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 15:01:11.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 15:01:11.634: INFO: stderr: "I0105 15:01:11.414150    3541 log.go:172] (0xc000650420) (0xc000358820) Create stream\nI0105 15:01:11.414332    3541 log.go:172] (0xc000650420) (0xc000358820) Stream added, broadcasting: 1\nI0105 15:01:11.417650    3541 log.go:172] (0xc000650420) Reply frame received for 1\nI0105 15:01:11.417704    3541 log.go:172] (0xc000650420) (0xc000988000) Create stream\nI0105 15:01:11.417713    3541 log.go:172] (0xc000650420) (0xc000988000) Stream added, broadcasting: 3\nI0105 15:01:11.421494    3541 log.go:172] (0xc000650420) Reply frame received for 3\nI0105 15:01:11.421518    3541 log.go:172] (0xc000650420) (0xc0006941e0) Create stream\nI0105 15:01:11.421525    3541 log.go:172] (0xc000650420) (0xc0006941e0) Stream added, broadcasting: 5\nI0105 15:01:11.422295    3541 log.go:172] (0xc000650420) Reply frame received for 5\nI0105 15:01:11.517131    3541 log.go:172] (0xc000650420) Data frame received for 3\nI0105 15:01:11.517345    3541 log.go:172] (0xc000988000) (3) Data frame handling\nI0105 15:01:11.517375    3541 log.go:172] (0xc000988000) (3) Data frame sent\nI0105 15:01:11.517424    3541 log.go:172] (0xc000650420) Data frame received for 5\nI0105 15:01:11.517447    3541 log.go:172] (0xc0006941e0) (5) Data frame handling\nI0105 15:01:11.517487    3541 log.go:172] (0xc0006941e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 15:01:11.623713    3541 log.go:172] (0xc000650420) Data frame received for 1\nI0105 15:01:11.623825    3541 log.go:172] (0xc000650420) (0xc000988000) Stream removed, broadcasting: 3\nI0105 15:01:11.623987    3541 log.go:172] (0xc000358820) (1) Data frame handling\nI0105 15:01:11.624035    3541 log.go:172] (0xc000650420) (0xc0006941e0) Stream removed, broadcasting: 5\nI0105 15:01:11.624088    3541 log.go:172] (0xc000358820) (1) Data frame sent\nI0105 15:01:11.624112    3541 log.go:172] (0xc000650420) (0xc000358820) Stream removed, broadcasting: 1\nI0105 15:01:11.624151    3541 log.go:172] (0xc000650420) Go away received\nI0105 15:01:11.625249    3541 log.go:172] (0xc000650420) (0xc000358820) Stream removed, broadcasting: 1\nI0105 15:01:11.625261    3541 log.go:172] (0xc000650420) (0xc000988000) Stream removed, broadcasting: 3\nI0105 15:01:11.625266    3541 log.go:172] (0xc000650420) (0xc0006941e0) Stream removed, broadcasting: 5\n"
Jan  5 15:01:11.634: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 15:01:11.634: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 15:01:11.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-57 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  5 15:01:12.287: INFO: stderr: "I0105 15:01:11.842372    3561 log.go:172] (0xc00078a420) (0xc0008aad20) Create stream\nI0105 15:01:11.842874    3561 log.go:172] (0xc00078a420) (0xc0008aad20) Stream added, broadcasting: 1\nI0105 15:01:11.852144    3561 log.go:172] (0xc00078a420) Reply frame received for 1\nI0105 15:01:11.852246    3561 log.go:172] (0xc00078a420) (0xc00078e000) Create stream\nI0105 15:01:11.852280    3561 log.go:172] (0xc00078a420) (0xc00078e000) Stream added, broadcasting: 3\nI0105 15:01:11.857927    3561 log.go:172] (0xc00078a420) Reply frame received for 3\nI0105 15:01:11.858025    3561 log.go:172] (0xc00078a420) (0xc0008aadc0) Create stream\nI0105 15:01:11.858045    3561 log.go:172] (0xc00078a420) (0xc0008aadc0) Stream added, broadcasting: 5\nI0105 15:01:11.861397    3561 log.go:172] (0xc00078a420) Reply frame received for 5\nI0105 15:01:12.041768    3561 log.go:172] (0xc00078a420) Data frame received for 3\nI0105 15:01:12.042003    3561 log.go:172] (0xc00078e000) (3) Data frame handling\nI0105 15:01:12.042047    3561 log.go:172] (0xc00078e000) (3) Data frame sent\nI0105 15:01:12.042115    3561 log.go:172] (0xc00078a420) Data frame received for 5\nI0105 15:01:12.042132    3561 log.go:172] (0xc0008aadc0) (5) Data frame handling\nI0105 15:01:12.042152    3561 log.go:172] (0xc0008aadc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0105 15:01:12.260439    3561 log.go:172] (0xc00078a420) Data frame received for 1\nI0105 15:01:12.260691    3561 log.go:172] (0xc00078a420) (0xc0008aadc0) Stream removed, broadcasting: 5\nI0105 15:01:12.260828    3561 log.go:172] (0xc0008aad20) (1) Data frame handling\nI0105 15:01:12.260870    3561 log.go:172] (0xc0008aad20) (1) Data frame sent\nI0105 15:01:12.260921    3561 log.go:172] (0xc00078a420) (0xc00078e000) Stream removed, broadcasting: 3\nI0105 15:01:12.260998    3561 log.go:172] (0xc00078a420) (0xc0008aad20) Stream removed, broadcasting: 1\nI0105 15:01:12.261071    3561 log.go:172] (0xc00078a420) Go away received\nI0105 15:01:12.262661    3561 log.go:172] (0xc00078a420) (0xc0008aad20) Stream removed, broadcasting: 1\nI0105 15:01:12.262683    3561 log.go:172] (0xc00078a420) (0xc00078e000) Stream removed, broadcasting: 3\nI0105 15:01:12.262687    3561 log.go:172] (0xc00078a420) (0xc0008aadc0) Stream removed, broadcasting: 5\n"
Jan  5 15:01:12.287: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  5 15:01:12.287: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  5 15:01:12.287: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
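[editor's note] Scaling to zero drains the ordinals in reverse (ss-2, then ss-1, then ss-0), each pod waiting for its successor to terminate. A minimal sketch of issuing the scale and waiting for the pod list to drain, assuming the same selector and namespace as above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ns := "statefulset-57"
	// Scale to zero, then poll until no pods match the selector.
	if out, err := exec.Command("kubectl", "scale", "statefulset", "ss",
		"--replicas=0", "--namespace", ns).CombinedOutput(); err != nil {
		fmt.Println(string(out), err)
		return
	}
	for {
		out, err := exec.Command("kubectl", "get", "pods",
			"-l", "baz=blah,foo=bar", "--namespace", ns,
			"-o", "template", "--template",
			`{{range .items}}{{.metadata.name}} {{end}}`).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Println("all stateful pods gone")
			return
		}
		time.Sleep(2 * time.Second)
	}
}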
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  5 15:01:42.328: INFO: Deleting all statefulset in ns statefulset-57
Jan  5 15:01:42.333: INFO: Scaling statefulset ss to 0
Jan  5 15:01:42.342: INFO: Waiting for statefulset status.replicas updated to 0
Jan  5 15:01:42.344: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:01:42.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-57" for this suite.
Jan  5 15:01:48.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:01:48.590: INFO: namespace statefulset-57 deletion completed in 6.169335797s

• [SLOW TEST:111.445 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:01:48.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  5 15:02:04.816: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:04.834: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:06.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:06.847: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:08.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:08.841: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:10.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:10.843: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:12.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:12.843: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:14.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:14.843: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:16.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:16.852: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:18.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:18.844: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:20.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:20.849: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:22.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:23.123: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:24.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:24.847: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:26.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:26.842: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:28.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:28.845: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:30.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:30.849: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:32.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:32.844: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:34.834: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:34.843: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  5 15:02:36.835: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  5 15:02:36.844: INFO: Pod pod-with-prestop-exec-hook no longer exists
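[editor's note] The loop above polls roughly every two seconds until the pod object is fully gone, which only happens after the preStop exec hook has completed and the grace period has been honoured. The same wait can be sketched by treating a non-zero "kubectl get" exit (NotFound) as "disappeared"; pod and namespace names are from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll every two seconds, as the log does, until kubectl reports the
	// pod NotFound, i.e. the preStop hook has run and deletion finished.
	for {
		err := exec.Command("kubectl", "get", "pod",
			"pod-with-prestop-exec-hook",
			"--namespace", "container-lifecycle-hook-4874").Run()
		if err != nil {
			fmt.Println("pod no longer exists")
			return
		}
		fmt.Println("pod still exists")
		time.Sleep(2 * time.Second)
	}
}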
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:02:36.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4874" for this suite.
Jan  5 15:02:58.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:02:59.081: INFO: namespace container-lifecycle-hook-4874 deletion completed in 22.133467827s

• [SLOW TEST:70.491 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:02:59.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  5 15:02:59.311: INFO: namespace kubectl-5516
Jan  5 15:02:59.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5516'
Jan  5 15:03:01.911: INFO: stderr: ""
Jan  5 15:03:01.911: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  5 15:03:02.925: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:02.925: INFO: Found 0 / 1
Jan  5 15:03:03.942: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:03.943: INFO: Found 0 / 1
Jan  5 15:03:04.931: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:04.931: INFO: Found 0 / 1
Jan  5 15:03:05.920: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:05.920: INFO: Found 0 / 1
Jan  5 15:03:06.918: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:06.918: INFO: Found 0 / 1
Jan  5 15:03:07.926: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:07.926: INFO: Found 0 / 1
Jan  5 15:03:08.924: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:08.924: INFO: Found 1 / 1
Jan  5 15:03:08.924: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  5 15:03:08.929: INFO: Selector matched 1 pods for map[app:redis]
Jan  5 15:03:08.929: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan  5 15:03:08.929: INFO: wait on redis-master startup in kubectl-5516 
Jan  5 15:03:08.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-54m2p redis-master --namespace=kubectl-5516'
Jan  5 15:03:09.120: INFO: stderr: ""
Jan  5 15:03:09.120: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jan 15:03:08.199 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jan 15:03:08.199 # Server started, Redis version 3.2.12\n1:M 05 Jan 15:03:08.199 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jan 15:03:08.200 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  5 15:03:09.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5516'
Jan  5 15:03:09.434: INFO: stderr: ""
Jan  5 15:03:09.434: INFO: stdout: "service/rm2 exposed\n"
Jan  5 15:03:09.443: INFO: Service rm2 in namespace kubectl-5516 found.
STEP: exposing service
Jan  5 15:03:11.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5516'
Jan  5 15:03:11.765: INFO: stderr: ""
Jan  5 15:03:11.765: INFO: stdout: "service/rm3 exposed\n"
Jan  5 15:03:11.777: INFO: Service rm3 in namespace kubectl-5516 found.
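[editor's note] The second expose targets the service rm2 rather than the RC, so rm3 inherits rm2's selector and still routes to the redis-master pods. A sketch replaying the two exposes plus a final existence check (namespace and ports taken from this run; kubectl assumed on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ns := "kubectl-5516"
	// Expose the RC as service rm2, then chain a second expose off rm2.
	steps := [][]string{
		{"expose", "rc", "redis-master", "--name=rm2", "--port=1234",
			"--target-port=6379", "--namespace=" + ns},
		{"expose", "service", "rm2", "--name=rm3", "--port=2345",
			"--target-port=6379", "--namespace=" + ns},
		{"get", "service", "rm3", "--namespace=" + ns}, // verify it exists
	}
	for _, args := range steps {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}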
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:03:13.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5516" for this suite.
Jan  5 15:03:37.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:03:38.075: INFO: namespace kubectl-5516 deletion completed in 24.266971015s

• [SLOW TEST:38.992 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:03:38.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 15:03:38.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5" in namespace "projected-754" to be "success or failure"
Jan  5 15:03:38.212: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 25.306534ms
Jan  5 15:03:40.222: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035483629s
Jan  5 15:03:42.235: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049208068s
Jan  5 15:03:44.246: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059856172s
Jan  5 15:03:46.255: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068901278s
STEP: Saw pod success
Jan  5 15:03:46.255: INFO: Pod "downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5" satisfied condition "success or failure"
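[editor's note] "success or failure" here is simply pod.status.phase reaching Succeeded or Failed; the framework polls it on a short interval, as the Elapsed column shows. A sketch of the same poll via a kubectl template (pod and namespace names are from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll pod.status.phase every two seconds until it is terminal.
	for {
		out, err := exec.Command("kubectl", "get", "pod",
			"downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5",
			"--namespace", "projected-754",
			"-o", "template", "--template", `{{.status.phase}}`).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		phase := strings.TrimSpace(string(out))
		fmt.Println("phase:", phase)
		if phase == "Succeeded" || phase == "Failed" {
			return
		}
		time.Sleep(2 * time.Second)
	}
}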
Jan  5 15:03:46.261: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5 container client-container: 
STEP: delete the pod
Jan  5 15:03:46.330: INFO: Waiting for pod downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5 to disappear
Jan  5 15:03:46.402: INFO: Pod downwardapi-volume-7e724a38-20d8-4b24-a5d8-a1063a929ce5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:03:46.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-754" for this suite.
Jan  5 15:03:52.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:03:52.596: INFO: namespace projected-754 deletion completed in 6.187133083s

• [SLOW TEST:14.520 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:03:52.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  5 15:03:52.713: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  5 15:03:53.295: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  5 15:03:55.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 15:03:57.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 15:03:59.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 15:04:01.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 15:04:03.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713833433, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  5 15:04:08.812: INFO: Waited 3.218088553s for the sample-apiserver to be ready to handle requests.
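[editor's note] The repeated DeploymentStatus dumps above are a hand-rolled availability poll: the test keeps reading the deployment until ReadyReplicas catches up. "kubectl rollout status" blocks on the same condition; a one-command sketch, assuming the sample apiserver deployment landed in this test's namespace:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Blocks until the deployment reports its desired replicas available,
	// which is what the status dumps above are polling for.
	out, err := exec.Command("kubectl", "rollout", "status",
		"deployment/sample-apiserver-deployment",
		"--namespace", "aggregator-7130").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("rollout failed:", err)
	}
}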
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:04:09.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7130" for this suite.
Jan  5 15:04:15.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:04:15.564: INFO: namespace aggregator-7130 deletion completed in 6.158226209s

• [SLOW TEST:22.968 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:04:15.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:04:21.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2750" for this suite.
Jan  5 15:04:27.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:04:27.498: INFO: namespace watch-2750 deletion completed in 6.368607608s

• [SLOW TEST:11.933 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:04:27.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5c3ae99f-81c5-4293-8b43-693c8c3960d1
STEP: Creating secret with name s-test-opt-upd-4ee36ff5-2773-4805-a770-b97aabe27ffe
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5c3ae99f-81c5-4293-8b43-693c8c3960d1
STEP: Updating secret s-test-opt-upd-4ee36ff5-2773-4805-a770-b97aabe27ffe
STEP: Creating secret with name s-test-opt-create-f7ecea6f-099c-4e36-be87-ab9f6b3c4da7
STEP: waiting to observe update in volume
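[editor's note] The three secrets exercise the three change paths a projected volume must track: an existing secret being deleted, an existing secret being updated, and a new optional secret appearing after the pod started. A sketch of the same mutations with kubectl (names shortened and values assumed for the sketch; the pod's volume should reflect each change within the kubelet's sync period):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one kubectl step; errors are printed, not fatal, in this sketch.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	ns := "secrets-4911"
	_ = run("create", "secret", "generic", "s-test-opt-del", "--from-literal=data-1=value-1", "--namespace", ns)
	_ = run("create", "secret", "generic", "s-test-opt-upd", "--from-literal=data-1=value-1", "--namespace", ns)
	_ = run("delete", "secret", "s-test-opt-del", "--namespace", ns)
	// Update an existing key in place; "dmFsdWUtMg==" is base64 of "value-2".
	_ = run("patch", "secret", "s-test-opt-upd", "--namespace", ns,
		"-p", `{"data":{"data-1":"dmFsdWUtMg=="}}`)
	_ = run("create", "secret", "generic", "s-test-opt-create", "--from-literal=data-1=value-1", "--namespace", ns)
}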
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:04:44.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4911" for this suite.
Jan  5 15:05:24.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:05:24.186: INFO: namespace secrets-4911 deletion completed in 40.124285473s

• [SLOW TEST:56.688 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
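For reference, the volumes this spec creates are secret volumes marked optional, so the pod starts even while s-test-opt-create-* does not exist yet, and the kubelet later projects the delete/update/create into the mount. A minimal sketch of the wiring against the k8s.io/api types (the volume name is hypothetical):

package main

import corev1 "k8s.io/api/core/v1"

// optionalSecretVolume returns a volume that tolerates the secret being
// absent; the kubelet fills in (or empties) the mounted files as the
// secret is created, updated, or deleted.
func optionalSecretVolume(secretName string) corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: secretName,
				Optional:   &optional,
			},
		},
	}
}

------------------------------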
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:05:24.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:05:32.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6078" for this suite.
Jan  5 15:06:18.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:06:18.576: INFO: namespace kubelet-test-6078 deletion completed in 46.258480713s

• [SLOW TEST:54.389 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
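For reference, the spec above boils down to: run a one-shot busybox command and read the container log back through the API. A minimal sketch, not the suite's implementation; pod name, namespace, and echo text are placeholders, and it assumes a recent client-go (context-taking verbs).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-echo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo hello from the kubelet"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// ...poll until the pod reaches Succeeded, then read what it printed:
	raw, err := cs.CoreV1().Pods("default").GetLogs("busybox-echo", &corev1.PodLogOptions{}).Do(ctx).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(raw))
}

------------------------------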
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:06:18.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  5 15:06:18.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e" in namespace "projected-1679" to be "success or failure"
Jan  5 15:06:18.744: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.272421ms
Jan  5 15:06:20.753: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018329276s
Jan  5 15:06:22.762: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02701254s
Jan  5 15:06:24.780: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045195799s
Jan  5 15:06:26.798: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063512609s
STEP: Saw pod success
Jan  5 15:06:26.799: INFO: Pod "downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e" satisfied condition "success or failure"
Jan  5 15:06:26.811: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e container client-container: 
STEP: delete the pod
Jan  5 15:06:26.901: INFO: Waiting for pod downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e to disappear
Jan  5 15:06:26.909: INFO: Pod downwardapi-volume-1e8ac209-4c9d-4e36-8564-fc9c3bc1cf5e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:06:26.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1679" for this suite.
Jan  5 15:06:32.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:06:33.120: INFO: namespace projected-1679 deletion completed in 6.165457637s

• [SLOW TEST:14.543 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
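For reference, the projected downward API item the spec mounts looks roughly like this in Go (a sketch against the k8s.io/api types, not the suite's own code; the path and volume name are hypothetical, the mode assertion is the point of the test):

package main

import corev1 "k8s.io/api/core/v1"

// downwardAPIProjection exposes the pod name as a file with an explicit
// per-item mode; the [It] above asserts that mode on the mounted file.
func downwardAPIProjection() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}

------------------------------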
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:06:33.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-638cad09-34a7-412f-afbb-f45a354ed379
STEP: Creating a pod to test consume configMaps
Jan  5 15:06:33.256: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31" in namespace "projected-9043" to be "success or failure"
Jan  5 15:06:33.267: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181758ms
Jan  5 15:06:35.276: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019123149s
Jan  5 15:06:37.287: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029933572s
Jan  5 15:06:39.297: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0402845s
Jan  5 15:06:41.310: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053192823s
Jan  5 15:06:43.326: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069038499s
STEP: Saw pod success
Jan  5 15:06:43.326: INFO: Pod "pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31" satisfied condition "success or failure"
Jan  5 15:06:43.337: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  5 15:06:43.443: INFO: Waiting for pod pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31 to disappear
Jan  5 15:06:43.461: INFO: Pod pod-projected-configmaps-39d4dd87-6436-4af9-8a5b-4cb4b80d3f31 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:06:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9043" for this suite.
Jan  5 15:06:49.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:06:49.710: INFO: namespace projected-9043 deletion completed in 6.223967692s

• [SLOW TEST:16.589 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
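For reference, consuming a configMap through the projected volume plugin (what the projected-configmap-volume-test container reads back) is wired like this; a minimal sketch with a hypothetical volume name:

package main

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume mounts every key of the named configMap as a
// file under the projected volume's mount path.
func projectedConfigMapVolume(configMapName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				}},
			},
		},
	}
}

------------------------------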
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:06:49.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  5 15:06:49.866: INFO: Waiting up to 5m0s for pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee" in namespace "containers-854" to be "success or failure"
Jan  5 15:06:49.876: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.547155ms
Jan  5 15:06:51.892: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025739016s
Jan  5 15:06:53.907: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041273404s
Jan  5 15:06:55.916: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049479479s
Jan  5 15:06:57.986: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120070957s
Jan  5 15:06:59.993: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.127029054s
STEP: Saw pod success
Jan  5 15:06:59.993: INFO: Pod "client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee" satisfied condition "success or failure"
Jan  5 15:06:59.997: INFO: Trying to get logs from node iruya-node pod client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee container test-container: 
STEP: delete the pod
Jan  5 15:07:00.054: INFO: Waiting for pod client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee to disappear
Jan  5 15:07:00.060: INFO: Pod client-containers-caa9297c-54a7-4e4a-aea8-cc80a922e8ee no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:07:00.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-854" for this suite.
Jan  5 15:07:06.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:07:06.203: INFO: namespace containers-854 deletion completed in 6.137250924s

• [SLOW TEST:16.492 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
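For reference, "use the image defaults" means the pod spec leaves both Command and Args unset, so the container runs whatever ENTRYPOINT/CMD the image defines. A minimal sketch; busybox stands in for the suite's own test image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultsPod omits Command and Args entirely, so the image's
// ENTRYPOINT and CMD decide what the container executes.
func defaultsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "use-image-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// No Command, no Args: image defaults apply.
			}},
		},
	}
}

------------------------------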
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:07:06.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  5 15:07:06.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  5 15:07:06.556: INFO: stderr: ""
Jan  5 15:07:06.556: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:07:06.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3462" for this suite.
Jan  5 15:07:12.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:07:12.739: INFO: namespace kubectl-3462 deletion completed in 6.168740659s

• [SLOW TEST:6.536 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
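For reference, the programmatic equivalent of `kubectl api-versions` is the discovery API: list the server's groups and look for the legacy core group's "v1". A minimal sketch with a placeholder kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" { // the legacy core group
				fmt.Println("v1 is available")
			}
		}
	}
}

------------------------------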
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:07:12.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b81e3f98-9ab3-40cf-96fc-1937fc8b16b6
STEP: Creating configMap with name cm-test-opt-upd-59f80fa0-ea9b-41e8-869e-ad936abe055a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b81e3f98-9ab3-40cf-96fc-1937fc8b16b6
STEP: Updating configmap cm-test-opt-upd-59f80fa0-ea9b-41e8-869e-ad936abe055a
STEP: Creating configMap with name cm-test-opt-create-fae59fa5-e776-4d83-8a94-8fe742c8f7ee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:08:43.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2834" for this suite.
Jan  5 15:09:05.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:09:05.179: INFO: namespace projected-2834 deletion completed in 22.170656847s

• [SLOW TEST:112.440 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
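For reference, the "Updating configmap" step is an ordinary Update on the object; a pod mounting it through an optional projected volume sees the new file contents once the kubelet's sync loop catches up, which is what "waiting to observe update in volume" polls for. A minimal sketch; the key and value are hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap mutates one key in place; the kubelet rewrites the
// mounted files for any pod projecting this configMap.
func bumpConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2"
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}

------------------------------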
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:09:05.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-2lxq
STEP: Creating a pod to test atomic-volume-subpath
Jan  5 15:09:05.301: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2lxq" in namespace "subpath-2937" to be "success or failure"
Jan  5 15:09:05.317: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Pending", Reason="", readiness=false. Elapsed: 16.183834ms
Jan  5 15:09:07.329: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027544953s
Jan  5 15:09:09.339: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037787873s
Jan  5 15:09:11.349: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047855232s
Jan  5 15:09:13.359: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 8.057400058s
Jan  5 15:09:15.368: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 10.067219059s
Jan  5 15:09:17.378: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 12.076947205s
Jan  5 15:09:19.386: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 14.084366024s
Jan  5 15:09:21.395: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 16.094188982s
Jan  5 15:09:23.423: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 18.122146375s
Jan  5 15:09:25.437: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 20.136166627s
Jan  5 15:09:27.446: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 22.144633669s
Jan  5 15:09:29.453: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 24.152104183s
Jan  5 15:09:31.464: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 26.162601634s
Jan  5 15:09:33.489: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Running", Reason="", readiness=true. Elapsed: 28.187886137s
Jan  5 15:09:35.499: INFO: Pod "pod-subpath-test-configmap-2lxq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.197379786s
STEP: Saw pod success
Jan  5 15:09:35.499: INFO: Pod "pod-subpath-test-configmap-2lxq" satisfied condition "success or failure"
Jan  5 15:09:35.505: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-2lxq container test-container-subpath-configmap-2lxq: 
STEP: delete the pod
Jan  5 15:09:35.564: INFO: Waiting for pod pod-subpath-test-configmap-2lxq to disappear
Jan  5 15:09:35.568: INFO: Pod pod-subpath-test-configmap-2lxq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2lxq
Jan  5 15:09:35.568: INFO: Deleting pod "pod-subpath-test-configmap-2lxq" in namespace "subpath-2937"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:09:35.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2937" for this suite.
Jan  5 15:09:41.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:09:41.759: INFO: namespace subpath-2937 deletion completed in 6.182944904s

• [SLOW TEST:36.579 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
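For reference, a subPath mount exposes a single entry of a volume at a fixed container path; for atomic-writer volumes like configMaps this is what the spec exercises while the pod keeps running. A minimal sketch of the mount (volume name, mount path, and key are hypothetical):

package main

import corev1 "k8s.io/api/core/v1"

// subPathMount mounts one file out of a configMap-backed volume rather
// than the whole directory.
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "configmap-volume",
		MountPath: "/test/sub/path",
		SubPath:   "configmap-key",
	}
}

------------------------------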
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:09:41.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-cb7a3c07-5e1f-4452-94ea-1201a42616b1
STEP: Creating a pod to test consume configMaps
Jan  5 15:09:41.947: INFO: Waiting up to 5m0s for pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7" in namespace "configmap-2810" to be "success or failure"
Jan  5 15:09:41.955: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.73003ms
Jan  5 15:09:43.974: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026130128s
Jan  5 15:09:45.980: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03284383s
Jan  5 15:09:47.988: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040935149s
Jan  5 15:09:50.003: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055351792s
STEP: Saw pod success
Jan  5 15:09:50.003: INFO: Pod "pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7" satisfied condition "success or failure"
Jan  5 15:09:50.008: INFO: Trying to get logs from node iruya-node pod pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7 container configmap-volume-test: 
STEP: delete the pod
Jan  5 15:09:50.067: INFO: Waiting for pod pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7 to disappear
Jan  5 15:09:50.117: INFO: Pod pod-configmaps-518f20c9-9d18-4c78-aa63-828505a12fc7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:09:50.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2810" for this suite.
Jan  5 15:09:56.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:09:56.290: INFO: namespace configmap-2810 deletion completed in 6.164787562s

• [SLOW TEST:14.530 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
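For reference, "mappings and Item mode set" means the configMap volume remaps a key to a new relative path and pins a per-item file mode. A minimal sketch against the k8s.io/api types; the key and path are hypothetical:

package main

import corev1 "k8s.io/api/core/v1"

// mappedConfigMapVolume remaps one key to a chosen path inside the
// mount and sets the file mode on that item.
func mappedConfigMapVolume(configMapName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "path/to/data-1",
					Mode: &mode,
				}},
			},
		},
	}
}

------------------------------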
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:09:56.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  5 15:09:56.439: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:10:11.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8243" for this suite.
Jan  5 15:10:17.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:10:17.446: INFO: namespace pods-8243 deletion completed in 6.31164098s

• [SLOW TEST:21.155 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
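For reference, the create/watch/delete choreography the STEP lines record reduces to: open a watch filtered to the pod, delete it gracefully, and drain events until DELETED arrives. A minimal sketch, assuming a recent client-go; the 30-second grace period mirrors the pod default rather than the suite's exact options:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// deleteAndObserve issues a graceful delete, then waits on the watch
// for the deletion to be observed.
func deleteAndObserve(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	grace := int64(30)
	if err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	}); err != nil {
		return err
	}
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			fmt.Println("pod deletion observed")
			return nil
		}
	}
	return fmt.Errorf("watch closed before deletion was observed")
}

------------------------------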
SSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:10:17.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  5 15:10:26.127: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2e91eb1a-6df0-4dfc-9c5d-bee1fd49ec59"
Jan  5 15:10:26.127: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2e91eb1a-6df0-4dfc-9c5d-bee1fd49ec59" in namespace "pods-4137" to be "terminated due to deadline exceeded"
Jan  5 15:10:26.151: INFO: Pod "pod-update-activedeadlineseconds-2e91eb1a-6df0-4dfc-9c5d-bee1fd49ec59": Phase="Running", Reason="", readiness=true. Elapsed: 23.650319ms
Jan  5 15:10:28.158: INFO: Pod "pod-update-activedeadlineseconds-2e91eb1a-6df0-4dfc-9c5d-bee1fd49ec59": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.030929062s
Jan  5 15:10:28.158: INFO: Pod "pod-update-activedeadlineseconds-2e91eb1a-6df0-4dfc-9c5d-bee1fd49ec59" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:10:28.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4137" for this suite.
Jan  5 15:10:34.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:10:34.337: INFO: namespace pods-4137 deletion completed in 6.172483966s

• [SLOW TEST:16.891 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
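For reference, activeDeadlineSeconds is one of the few pod spec fields that may be updated on a live pod; shortening it makes the kubelet kill the pod and report Phase=Failed with Reason=DeadlineExceeded, exactly as logged above. A minimal sketch; the 5-second deadline is an arbitrary illustrative value:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shortenDeadline updates a running pod's activeDeadlineSeconds.
func shortenDeadline(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}

------------------------------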
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:10:34.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0105 15:10:37.761884       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  5 15:10:37.761: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:10:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-672" for this suite.
Jan  5 15:10:44.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:10:44.187: INFO: namespace gc-672 deletion completed in 6.416814693s

• [SLOW TEST:9.849 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
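For reference, "not orphaning" means the Deployment is deleted with a cascading propagation policy, so the garbage collector removes the owned ReplicaSet and its pods; that is why the spec polls "expected 0 rs / 0 pods" until the graph settles. A minimal sketch; whether the suite uses Background or Foreground propagation here is not shown in the log, so Background is an assumption:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteWithCascade deletes a Deployment and lets the garbage
// collector clean up the dependent ReplicaSets and pods.
func deleteWithCascade(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}

------------------------------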
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  5 15:10:44.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  5 15:10:52.908: INFO: Successfully updated pod "annotationupdate2fd20aac-c244-4cfa-b7e0-25387c02b5ca"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  5 15:10:55.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4681" for this suite.
Jan  5 15:11:19.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  5 15:11:19.231: INFO: namespace downward-api-4681 deletion completed in 24.154008652s

• [SLOW TEST:35.043 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
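For reference, the "Successfully updated pod" line corresponds to mutating the pod's annotations; a downward API volume item with fieldPath metadata.annotations is then rewritten by the kubelet, which is the modification the spec waits to observe. A minimal sketch using a strategic-merge patch; the annotation key and value are hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchAnnotations flips an annotation on a live pod, triggering a
// downward API volume refresh on the node.
func patchAnnotations(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"builder":"bar"}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, name, types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	return err
}

------------------------------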
Jan  5 15:11:19.231: INFO: Running AfterSuite actions on all nodes
Jan  5 15:11:19.231: INFO: Running AfterSuite actions on node 1
Jan  5 15:11:19.231: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8119.858 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS