I0108 12:56:12.545651       8 e2e.go:243] Starting e2e run "b060eb11-87a0-4c96-b55d-d72366d4fc98" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578488171 - Will randomize all specs
Will run 215 of 4412 specs

Jan  8 12:56:12.842: INFO: >>> kubeConfig: /root/.kube/config
Jan  8 12:56:12.847: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan  8 12:56:12.914: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan  8 12:56:12.941: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan  8 12:56:12.941: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan  8 12:56:12.941: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan  8 12:56:12.948: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan  8 12:56:12.948: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan  8 12:56:12.948: INFO: e2e test version: v1.15.7
Jan  8 12:56:12.949: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:56:12.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan  8 12:56:13.118: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 12:56:13.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9673'
Jan  8 12:56:16.612: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 12:56:16.612: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  8 12:56:18.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9673'
Jan  8 12:56:19.010: INFO: stderr: ""
Jan  8 12:56:19.010: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:56:19.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9673" for this suite.
Jan  8 12:56:25.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:56:25.141: INFO: namespace kubectl-9673 deletion completed in 6.121669005s

• [SLOW TEST:12.192 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
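[Note] The stderr above flags --generator=deployment/apps.v1 for removal. A minimal sketch of the replacement the warning points to, reusing the run's image and namespace (commands assumed equivalent for this kubectl era, not part of the recorded run):

# Non-deprecated way to create the same deployment:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --namespace=kubectl-9673
# Cleanup mirroring the test's AfterEach:
kubectl delete deployment e2e-test-nginx-deployment --namespace=kubectl-9673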
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:56:25.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:56:25.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5380" for this suite.
Jan  8 12:56:31.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:56:31.520: INFO: namespace kubelet-test-5380 deletion completed in 6.158189889s

• [SLOW TEST:6.378 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
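[Note] The pod this test creates runs a busybox command that always fails, and the assertion is only that such a pod can still be deleted. A hand-rolled stand-in would look roughly like this (pod name and restartPolicy are assumptions; the test generates its own pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo          # assumed name
spec:
  restartPolicy: Never          # assumption; keeps the sketch short
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # exits non-zero every time
EOF
kubectl delete pod bin-false-demo   # the deletion is the behavior under test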
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:56:31.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan  8 12:56:31.621: INFO: Waiting up to 5m0s for pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271" in namespace "var-expansion-8273" to be "success or failure"
Jan  8 12:56:31.628: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Pending", Reason="", readiness=false. Elapsed: 6.698239ms
Jan  8 12:56:33.636: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015059601s
Jan  8 12:56:35.646: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024528599s
Jan  8 12:56:37.654: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032592167s
Jan  8 12:56:39.661: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040053071s
Jan  8 12:56:41.667: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045794521s
STEP: Saw pod success
Jan  8 12:56:41.667: INFO: Pod "var-expansion-3da71c35-f694-4795-a904-55362fc62271" satisfied condition "success or failure"
Jan  8 12:56:41.670: INFO: Trying to get logs from node iruya-node pod var-expansion-3da71c35-f694-4795-a904-55362fc62271 container dapi-container: 
STEP: delete the pod
Jan  8 12:56:41.792: INFO: Waiting for pod var-expansion-3da71c35-f694-4795-a904-55362fc62271 to disappear
Jan  8 12:56:41.824: INFO: Pod var-expansion-3da71c35-f694-4795-a904-55362fc62271 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:56:41.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8273" for this suite.
Jan  8 12:56:49.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:56:50.013: INFO: namespace var-expansion-8273 deletion completed in 8.145926371s

• [SLOW TEST:18.493 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
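[Note] The var-expansion pod above exercises $(VAR) substitution in a container's args. A minimal sketch of the same mechanism (pod name and env value assumed; the container name matches the log's dapi-container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo      # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]  # $(GREETING) is expanded from env before the shell runs
    env:
    - name: GREETING
      value: "test-value"       # assumed value
EOF
kubectl logs var-expansion-demo   # once completed, prints: test-value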
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:56:50.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan  8 12:56:50.101: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix374124627/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:56:50.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7174" for this suite.
Jan  8 12:56:56.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:56:56.388: INFO: namespace kubectl-7174 deletion completed in 6.212509626s

• [SLOW TEST:6.374 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
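[Note] What the proxy check does, by hand, with an assumed socket path:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &   # socket path assumed
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # same /api/ retrieval as the test
kill %1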
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:56:56.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 12:56:56.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17" in namespace "downward-api-627" to be "success or failure"
Jan  8 12:56:56.545: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Pending", Reason="", readiness=false. Elapsed: 50.763533ms
Jan  8 12:56:58.557: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06274249s
Jan  8 12:57:00.573: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079128919s
Jan  8 12:57:02.588: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093964329s
Jan  8 12:57:04.594: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099837749s
Jan  8 12:57:06.609: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114813415s
STEP: Saw pod success
Jan  8 12:57:06.609: INFO: Pod "downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17" satisfied condition "success or failure"
Jan  8 12:57:06.620: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17 container client-container: 
STEP: delete the pod
Jan  8 12:57:07.004: INFO: Waiting for pod downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17 to disappear
Jan  8 12:57:07.009: INFO: Pod downwardapi-volume-db96fa9f-9176-41fa-8984-5971ba6faf17 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:57:07.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-627" for this suite.
Jan  8 12:57:13.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:57:13.153: INFO: namespace downward-api-627 deletion completed in 6.139650097s

• [SLOW TEST:16.765 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
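[Note] The downward API volume plugin under test projects the container's CPU request into a file the container then reads. A minimal sketch (pod name, mount path, request value, and divisor are assumptions; client-container matches the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo    # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m               # assumed request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m           # report the request in millicores (prints 250)
EOF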
\"0fef0add-10bb-429b-871c-6bfbe309a89d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9dklq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-node\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9dklq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9dklq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-08T12:57:13Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-08T12:57:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-08T12:57:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-01-08T12:57:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://2e9d49ac11d36c635f53273306d509c292369f5646215c388eda08e6dc9e30cf\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-01-08T12:57:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.3.65\",\n \"phase\": \"Running\",\n \"podIP\": \"10.44.0.1\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-01-08T12:57:13Z\"\n }\n}\n" STEP: replace the image in the pod Jan 8 12:57:23.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1855' Jan 8 12:57:23.981: INFO: stderr: "" Jan 8 12:57:23.981: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jan 8 12:57:24.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1855' Jan 8 12:57:29.889: INFO: stderr: "" Jan 8 12:57:29.889: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 8 12:57:29.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1855" 
Jan  8 12:57:35.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:57:36.097: INFO: namespace kubectl-1855 deletion completed in 6.131273481s

• [SLOW TEST:22.944 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
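[Note] The replace step above fetches the pod as JSON, swaps the image, and pipes the result back through kubectl replace. By hand, that flow is roughly as follows (the sed edit is an assumption about how the test mutates the JSON):

kubectl get pod e2e-test-nginx-pod --namespace=kubectl-1855 -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace --namespace=kubectl-1855 -f -
kubectl get pod e2e-test-nginx-pod --namespace=kubectl-1855 \
  -o jsonpath='{.spec.containers[0].image}'   # expect docker.io/library/busybox:1.29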
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:57:36.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 12:57:36.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69" in namespace "downward-api-3125" to be "success or failure"
Jan  8 12:57:36.293: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Pending", Reason="", readiness=false. Elapsed: 25.20845ms
Jan  8 12:57:38.303: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035433158s
Jan  8 12:57:40.315: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046940117s
Jan  8 12:57:42.360: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Running", Reason="", readiness=true. Elapsed: 6.092264916s
Jan  8 12:57:44.371: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Running", Reason="", readiness=true. Elapsed: 8.102767546s
Jan  8 12:57:46.379: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.111533584s
STEP: Saw pod success
Jan  8 12:57:46.379: INFO: Pod "downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69" satisfied condition "success or failure"
Jan  8 12:57:46.384: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69 container client-container: 
STEP: delete the pod
Jan  8 12:57:46.497: INFO: Waiting for pod downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69 to disappear
Jan  8 12:57:46.509: INFO: Pod downwardapi-volume-2caef054-0a42-456b-9f23-e3fd9e69cb69 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:57:46.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3125" for this suite.
Jan  8 12:57:52.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:57:52.675: INFO: namespace downward-api-3125 deletion completed in 6.159714829s

• [SLOW TEST:16.578 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
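[Note] The mode-on-item-file check sets an explicit per-item file mode in a downwardAPI volume. A minimal sketch (pod name and projected field assumed; client-container matches the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400              # ls should show -r--------
EOF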
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:57:52.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 12:57:52.919: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 14.410971ms)
Jan  8 12:57:52.928: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.490544ms)
Jan  8 12:57:52.936: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.461186ms)
Jan  8 12:57:52.944: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.799026ms)
Jan  8 12:57:52.949: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.531255ms)
Jan  8 12:57:52.957: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.572227ms)
Jan  8 12:57:52.969: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.952521ms)
Jan  8 12:57:52.973: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.507929ms)
Jan  8 12:57:52.976: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.049705ms)
Jan  8 12:57:52.979: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 2.74983ms)
Jan  8 12:57:52.982: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.2522ms)
Jan  8 12:57:53.017: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 35.567561ms)
Jan  8 12:57:53.021: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.755905ms)
Jan  8 12:57:53.025: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.579122ms)
Jan  8 12:57:53.029: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.49381ms)
Jan  8 12:57:53.034: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.935271ms)
Jan  8 12:57:53.040: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.383657ms)
Jan  8 12:57:53.044: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.395109ms)
Jan  8 12:57:53.049: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.983186ms)
Jan  8 12:57:53.052: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.312174ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:57:53.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7420" for this suite.
Jan  8 12:57:59.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:57:59.224: INFO: namespace proxy-7420 deletion completed in 6.167691821s

• [SLOW TEST:6.548 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
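[Note] Each numbered request above hits the kubelet's /logs/ listing through the apiserver's node proxy subresource with an explicit port. The same path can be fetched by hand:

kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"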
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:57:59.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 12:57:59.288: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd" in namespace "downward-api-573" to be "success or failure"
Jan  8 12:57:59.311: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 23.691611ms
Jan  8 12:58:01.320: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032157713s
Jan  8 12:58:03.331: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043014784s
Jan  8 12:58:05.340: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052319901s
Jan  8 12:58:07.349: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060873362s
STEP: Saw pod success
Jan  8 12:58:07.349: INFO: Pod "downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd" satisfied condition "success or failure"
Jan  8 12:58:07.352: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd container client-container: 
STEP: delete the pod
Jan  8 12:58:07.472: INFO: Waiting for pod downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd to disappear
Jan  8 12:58:07.511: INFO: Pod downwardapi-volume-84e1f343-efdd-4658-87ff-b9d8c36cd7dd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:58:07.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-573" for this suite.
Jan  8 12:58:13.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:58:13.974: INFO: namespace downward-api-573 deletion completed in 6.258864206s

• [SLOW TEST:14.750 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
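[Note] Here the container sets no memory limit, so the downward API's limits.memory falls back to the node's allocatable memory. A minimal sketch of the same projection (pod name assumed; note the absence of resources.limits):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # prints node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF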
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:58:13.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 12:58:14.199: INFO: Creating deployment "test-recreate-deployment"
Jan  8 12:58:14.208: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  8 12:58:14.224: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  8 12:58:16.240: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  8 12:58:16.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 12:58:18.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 12:58:20.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085094, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 12:58:22.252: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  8 12:58:22.267: INFO: Updating deployment test-recreate-deployment
Jan  8 12:58:22.267: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  8 12:58:22.830: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/deployments/test-recreate-deployment,UID:26a4fda2-090d-412d-97a1-3e48c606386e,ResourceVersion:19771410,Generation:2,CreationTimestamp:2020-01-08 12:58:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-08 12:58:22 +0000 UTC 2020-01-08 12:58:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-08 12:58:22 +0000 UTC 2020-01-08 12:58:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan  8 12:58:22.893: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8be7951c-3937-4c4a-8ea5-59b7f4dbdf7c,ResourceVersion:19771409,Generation:1,CreationTimestamp:2020-01-08 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 26a4fda2-090d-412d-97a1-3e48c606386e 0xc0027e9f67 0xc0027e9f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 12:58:22.893: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  8 12:58:22.893: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2068,SelfLink:/apis/apps/v1/namespaces/deployment-2068/replicasets/test-recreate-deployment-6df85df6b9,UID:ad1d514a-3e3f-46be-b47d-e310eae7a001,ResourceVersion:19771397,Generation:2,CreationTimestamp:2020-01-08 12:58:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 26a4fda2-090d-412d-97a1-3e48c606386e 0xc0028cc037 0xc0028cc038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 12:58:22.899: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bj89c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bj89c,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2068,SelfLink:/api/v1/namespaces/deployment-2068/pods/test-recreate-deployment-5c8c9cc69d-bj89c,UID:c01165ca-e311-4cea-80be-e187dbec7a30,ResourceVersion:19771412,Generation:0,CreationTimestamp:2020-01-08 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8be7951c-3937-4c4a-8ea5-59b7f4dbdf7c 0xc0028cc917 0xc0028cc918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tm8wj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tm8wj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tm8wj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0028cc990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0028cc9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 12:58:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 12:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 12:58:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 12:58:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 12:58:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:58:22.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2068" for this suite.
Jan  8 12:58:28.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:58:29.048: INFO: namespace deployment-2068 deletion completed in 6.145491353s

• [SLOW TEST:15.074 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
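[Note] The status dumps above show the Recreate strategy at work: the old ReplicaSet is scaled to zero before the new one makes progress, so old and new pods never overlap. A minimal sketch of such a deployment plus a rollout trigger (deployment name assumed; labels and images taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo           # assumed name
spec:
  replicas: 1
  strategy:
    type: Recreate              # delete old pods before creating new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Trigger a rollout like the test's update (container name kept, unlike the test):
kubectl set image deployment/recreate-demo redis=docker.io/library/nginx:1.14-alpine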
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:58:29.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0108 12:58:40.530363       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 12:58:40.530: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:58:40.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2030" for this suite.
Jan  8 12:58:46.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 12:58:46.686: INFO: namespace gc-2030 deletion completed in 6.15149958s

• [SLOW TEST:17.638 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
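[Note] "Not orphaning" means a default cascading delete: once the RC is gone, the garbage collector removes its pods too. A hand-run version of the create/delete/wait sequence (RC name and labels assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc           # assumed name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl delete rc simpletest-rc       # cascading by default; pods are not orphaned
kubectl get pods -l name=simpletest   # should eventually return no resources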
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 12:58:46.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  8 12:59:06.874: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:06.874: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:06.961474       8 log.go:172] (0xc00084da20) (0xc00227e8c0) Create stream
I0108 12:59:06.961555       8 log.go:172] (0xc00084da20) (0xc00227e8c0) Stream added, broadcasting: 1
I0108 12:59:06.970962       8 log.go:172] (0xc00084da20) Reply frame received for 1
I0108 12:59:06.971010       8 log.go:172] (0xc00084da20) (0xc0027e2000) Create stream
I0108 12:59:06.971021       8 log.go:172] (0xc00084da20) (0xc0027e2000) Stream added, broadcasting: 3
I0108 12:59:06.972975       8 log.go:172] (0xc00084da20) Reply frame received for 3
I0108 12:59:06.973022       8 log.go:172] (0xc00084da20) (0xc0027e20a0) Create stream
I0108 12:59:06.973036       8 log.go:172] (0xc00084da20) (0xc0027e20a0) Stream added, broadcasting: 5
I0108 12:59:06.974961       8 log.go:172] (0xc00084da20) Reply frame received for 5
I0108 12:59:07.107358       8 log.go:172] (0xc00084da20) Data frame received for 3
I0108 12:59:07.107451       8 log.go:172] (0xc0027e2000) (3) Data frame handling
I0108 12:59:07.107473       8 log.go:172] (0xc0027e2000) (3) Data frame sent
I0108 12:59:07.231975       8 log.go:172] (0xc00084da20) (0xc0027e2000) Stream removed, broadcasting: 3
I0108 12:59:07.232220       8 log.go:172] (0xc00084da20) Data frame received for 1
I0108 12:59:07.232315       8 log.go:172] (0xc00227e8c0) (1) Data frame handling
I0108 12:59:07.232348       8 log.go:172] (0xc00084da20) (0xc0027e20a0) Stream removed, broadcasting: 5
I0108 12:59:07.232428       8 log.go:172] (0xc00227e8c0) (1) Data frame sent
I0108 12:59:07.232450       8 log.go:172] (0xc00084da20) (0xc00227e8c0) Stream removed, broadcasting: 1
I0108 12:59:07.232465       8 log.go:172] (0xc00084da20) Go away received
I0108 12:59:07.233194       8 log.go:172] (0xc00084da20) (0xc00227e8c0) Stream removed, broadcasting: 1
I0108 12:59:07.233238       8 log.go:172] (0xc00084da20) (0xc0027e2000) Stream removed, broadcasting: 3
I0108 12:59:07.233281       8 log.go:172] (0xc00084da20) (0xc0027e20a0) Stream removed, broadcasting: 5
Jan  8 12:59:07.233: INFO: Exec stderr: ""
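[Note] The ExecWithOptions round-trips here amount to this manual check (pod, namespace, and container names taken from the log):

kubectl exec test-pod --namespace=e2e-kubelet-etc-hosts-9047 -c busybox-1 -- cat /etc/hosts
kubectl exec test-pod --namespace=e2e-kubelet-etc-hosts-9047 -c busybox-1 -- cat /etc/hosts-original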
Jan  8 12:59:07.233: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:07.233: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:07.302981       8 log.go:172] (0xc0031dc9a0) (0xc00227ec80) Create stream
I0108 12:59:07.303037       8 log.go:172] (0xc0031dc9a0) (0xc00227ec80) Stream added, broadcasting: 1
I0108 12:59:07.309690       8 log.go:172] (0xc0031dc9a0) Reply frame received for 1
I0108 12:59:07.309723       8 log.go:172] (0xc0031dc9a0) (0xc001744e60) Create stream
I0108 12:59:07.309731       8 log.go:172] (0xc0031dc9a0) (0xc001744e60) Stream added, broadcasting: 3
I0108 12:59:07.311629       8 log.go:172] (0xc0031dc9a0) Reply frame received for 3
I0108 12:59:07.311662       8 log.go:172] (0xc0031dc9a0) (0xc0032c4960) Create stream
I0108 12:59:07.311671       8 log.go:172] (0xc0031dc9a0) (0xc0032c4960) Stream added, broadcasting: 5
I0108 12:59:07.313373       8 log.go:172] (0xc0031dc9a0) Reply frame received for 5
I0108 12:59:07.389753       8 log.go:172] (0xc0031dc9a0) Data frame received for 3
I0108 12:59:07.389812       8 log.go:172] (0xc001744e60) (3) Data frame handling
I0108 12:59:07.389827       8 log.go:172] (0xc001744e60) (3) Data frame sent
I0108 12:59:07.494812       8 log.go:172] (0xc0031dc9a0) Data frame received for 1
I0108 12:59:07.494905       8 log.go:172] (0xc0031dc9a0) (0xc0032c4960) Stream removed, broadcasting: 5
I0108 12:59:07.494946       8 log.go:172] (0xc00227ec80) (1) Data frame handling
I0108 12:59:07.494967       8 log.go:172] (0xc00227ec80) (1) Data frame sent
I0108 12:59:07.494994       8 log.go:172] (0xc0031dc9a0) (0xc001744e60) Stream removed, broadcasting: 3
I0108 12:59:07.495076       8 log.go:172] (0xc0031dc9a0) (0xc00227ec80) Stream removed, broadcasting: 1
I0108 12:59:07.495102       8 log.go:172] (0xc0031dc9a0) Go away received
I0108 12:59:07.495609       8 log.go:172] (0xc0031dc9a0) (0xc00227ec80) Stream removed, broadcasting: 1
I0108 12:59:07.495641       8 log.go:172] (0xc0031dc9a0) (0xc001744e60) Stream removed, broadcasting: 3
I0108 12:59:07.495652       8 log.go:172] (0xc0031dc9a0) (0xc0032c4960) Stream removed, broadcasting: 5
Jan  8 12:59:07.495: INFO: Exec stderr: ""
Jan  8 12:59:07.495: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:07.495: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:07.551287       8 log.go:172] (0xc0031dd8c0) (0xc00227f040) Create stream
I0108 12:59:07.551325       8 log.go:172] (0xc0031dd8c0) (0xc00227f040) Stream added, broadcasting: 1
I0108 12:59:07.560848       8 log.go:172] (0xc0031dd8c0) Reply frame received for 1
I0108 12:59:07.560882       8 log.go:172] (0xc0031dd8c0) (0xc00202e000) Create stream
I0108 12:59:07.560894       8 log.go:172] (0xc0031dd8c0) (0xc00202e000) Stream added, broadcasting: 3
I0108 12:59:07.563529       8 log.go:172] (0xc0031dd8c0) Reply frame received for 3
I0108 12:59:07.563559       8 log.go:172] (0xc0031dd8c0) (0xc00227f0e0) Create stream
I0108 12:59:07.563571       8 log.go:172] (0xc0031dd8c0) (0xc00227f0e0) Stream added, broadcasting: 5
I0108 12:59:07.566355       8 log.go:172] (0xc0031dd8c0) Reply frame received for 5
I0108 12:59:07.666084       8 log.go:172] (0xc0031dd8c0) Data frame received for 3
I0108 12:59:07.666154       8 log.go:172] (0xc00202e000) (3) Data frame handling
I0108 12:59:07.666172       8 log.go:172] (0xc00202e000) (3) Data frame sent
I0108 12:59:07.783691       8 log.go:172] (0xc0031dd8c0) (0xc00202e000) Stream removed, broadcasting: 3
I0108 12:59:07.783952       8 log.go:172] (0xc0031dd8c0) Data frame received for 1
I0108 12:59:07.784002       8 log.go:172] (0xc0031dd8c0) (0xc00227f0e0) Stream removed, broadcasting: 5
I0108 12:59:07.784097       8 log.go:172] (0xc00227f040) (1) Data frame handling
I0108 12:59:07.784129       8 log.go:172] (0xc00227f040) (1) Data frame sent
I0108 12:59:07.784148       8 log.go:172] (0xc0031dd8c0) (0xc00227f040) Stream removed, broadcasting: 1
I0108 12:59:07.784172       8 log.go:172] (0xc0031dd8c0) Go away received
I0108 12:59:07.784510       8 log.go:172] (0xc0031dd8c0) (0xc00227f040) Stream removed, broadcasting: 1
I0108 12:59:07.784537       8 log.go:172] (0xc0031dd8c0) (0xc00202e000) Stream removed, broadcasting: 3
I0108 12:59:07.784556       8 log.go:172] (0xc0031dd8c0) (0xc00227f0e0) Stream removed, broadcasting: 5
Jan  8 12:59:07.784: INFO: Exec stderr: ""
Jan  8 12:59:07.784: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:07.784: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:07.851021       8 log.go:172] (0xc000860e70) (0xc0027e23c0) Create stream
I0108 12:59:07.851110       8 log.go:172] (0xc000860e70) (0xc0027e23c0) Stream added, broadcasting: 1
I0108 12:59:07.859269       8 log.go:172] (0xc000860e70) Reply frame received for 1
I0108 12:59:07.859379       8 log.go:172] (0xc000860e70) (0xc0032c4a00) Create stream
I0108 12:59:07.859390       8 log.go:172] (0xc000860e70) (0xc0032c4a00) Stream added, broadcasting: 3
I0108 12:59:07.861159       8 log.go:172] (0xc000860e70) Reply frame received for 3
I0108 12:59:07.861184       8 log.go:172] (0xc000860e70) (0xc0032c4aa0) Create stream
I0108 12:59:07.861192       8 log.go:172] (0xc000860e70) (0xc0032c4aa0) Stream added, broadcasting: 5
I0108 12:59:07.862443       8 log.go:172] (0xc000860e70) Reply frame received for 5
I0108 12:59:07.960529       8 log.go:172] (0xc000860e70) Data frame received for 3
I0108 12:59:07.960622       8 log.go:172] (0xc0032c4a00) (3) Data frame handling
I0108 12:59:07.960659       8 log.go:172] (0xc0032c4a00) (3) Data frame sent
I0108 12:59:08.108762       8 log.go:172] (0xc000860e70) (0xc0032c4a00) Stream removed, broadcasting: 3
I0108 12:59:08.108856       8 log.go:172] (0xc000860e70) Data frame received for 1
I0108 12:59:08.108869       8 log.go:172] (0xc0027e23c0) (1) Data frame handling
I0108 12:59:08.108880       8 log.go:172] (0xc0027e23c0) (1) Data frame sent
I0108 12:59:08.108887       8 log.go:172] (0xc000860e70) (0xc0027e23c0) Stream removed, broadcasting: 1
I0108 12:59:08.108895       8 log.go:172] (0xc000860e70) (0xc0032c4aa0) Stream removed, broadcasting: 5
I0108 12:59:08.108911       8 log.go:172] (0xc000860e70) Go away received
I0108 12:59:08.109096       8 log.go:172] (0xc000860e70) (0xc0027e23c0) Stream removed, broadcasting: 1
I0108 12:59:08.109113       8 log.go:172] (0xc000860e70) (0xc0032c4a00) Stream removed, broadcasting: 3
I0108 12:59:08.109123       8 log.go:172] (0xc000860e70) (0xc0032c4aa0) Stream removed, broadcasting: 5
Jan  8 12:59:08.109: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  8 12:59:08.109: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:08.109: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:08.156226       8 log.go:172] (0xc000f5cbb0) (0xc00227f400) Create stream
I0108 12:59:08.156258       8 log.go:172] (0xc000f5cbb0) (0xc00227f400) Stream added, broadcasting: 1
I0108 12:59:08.170376       8 log.go:172] (0xc000f5cbb0) Reply frame received for 1
I0108 12:59:08.170460       8 log.go:172] (0xc000f5cbb0) (0xc00202e140) Create stream
I0108 12:59:08.170467       8 log.go:172] (0xc000f5cbb0) (0xc00202e140) Stream added, broadcasting: 3
I0108 12:59:08.176085       8 log.go:172] (0xc000f5cbb0) Reply frame received for 3
I0108 12:59:08.176111       8 log.go:172] (0xc000f5cbb0) (0xc0027e2460) Create stream
I0108 12:59:08.176118       8 log.go:172] (0xc000f5cbb0) (0xc0027e2460) Stream added, broadcasting: 5
I0108 12:59:08.181159       8 log.go:172] (0xc000f5cbb0) Reply frame received for 5
I0108 12:59:08.292348       8 log.go:172] (0xc000f5cbb0) Data frame received for 3
I0108 12:59:08.292670       8 log.go:172] (0xc00202e140) (3) Data frame handling
I0108 12:59:08.292720       8 log.go:172] (0xc00202e140) (3) Data frame sent
I0108 12:59:08.426976       8 log.go:172] (0xc000f5cbb0) Data frame received for 1
I0108 12:59:08.427110       8 log.go:172] (0xc00227f400) (1) Data frame handling
I0108 12:59:08.427140       8 log.go:172] (0xc00227f400) (1) Data frame sent
I0108 12:59:08.428150       8 log.go:172] (0xc000f5cbb0) (0xc00227f400) Stream removed, broadcasting: 1
I0108 12:59:08.428544       8 log.go:172] (0xc000f5cbb0) (0xc0027e2460) Stream removed, broadcasting: 5
I0108 12:59:08.428586       8 log.go:172] (0xc000f5cbb0) (0xc00202e140) Stream removed, broadcasting: 3
I0108 12:59:08.428660       8 log.go:172] (0xc000f5cbb0) Go away received
I0108 12:59:08.428920       8 log.go:172] (0xc000f5cbb0) (0xc00227f400) Stream removed, broadcasting: 1
I0108 12:59:08.428941       8 log.go:172] (0xc000f5cbb0) (0xc00202e140) Stream removed, broadcasting: 3
I0108 12:59:08.428948       8 log.go:172] (0xc000f5cbb0) (0xc0027e2460) Stream removed, broadcasting: 5
Jan  8 12:59:08.428: INFO: Exec stderr: ""
Jan  8 12:59:08.429: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:08.429: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:08.553089       8 log.go:172] (0xc000f3cc60) (0xc00202e460) Create stream
I0108 12:59:08.553208       8 log.go:172] (0xc000f3cc60) (0xc00202e460) Stream added, broadcasting: 1
I0108 12:59:08.611481       8 log.go:172] (0xc000f3cc60) Reply frame received for 1
I0108 12:59:08.611773       8 log.go:172] (0xc000f3cc60) (0xc00227f540) Create stream
I0108 12:59:08.611798       8 log.go:172] (0xc000f3cc60) (0xc00227f540) Stream added, broadcasting: 3
I0108 12:59:08.615430       8 log.go:172] (0xc000f3cc60) Reply frame received for 3
I0108 12:59:08.615537       8 log.go:172] (0xc000f3cc60) (0xc0032c4b40) Create stream
I0108 12:59:08.615543       8 log.go:172] (0xc000f3cc60) (0xc0032c4b40) Stream added, broadcasting: 5
I0108 12:59:08.620953       8 log.go:172] (0xc000f3cc60) Reply frame received for 5
I0108 12:59:08.764088       8 log.go:172] (0xc000f3cc60) Data frame received for 3
I0108 12:59:08.764142       8 log.go:172] (0xc00227f540) (3) Data frame handling
I0108 12:59:08.764159       8 log.go:172] (0xc00227f540) (3) Data frame sent
I0108 12:59:08.881193       8 log.go:172] (0xc000f3cc60) (0xc00227f540) Stream removed, broadcasting: 3
I0108 12:59:08.881433       8 log.go:172] (0xc000f3cc60) Data frame received for 1
I0108 12:59:08.881456       8 log.go:172] (0xc00202e460) (1) Data frame handling
I0108 12:59:08.881486       8 log.go:172] (0xc00202e460) (1) Data frame sent
I0108 12:59:08.881505       8 log.go:172] (0xc000f3cc60) (0xc00202e460) Stream removed, broadcasting: 1
I0108 12:59:08.881630       8 log.go:172] (0xc000f3cc60) (0xc0032c4b40) Stream removed, broadcasting: 5
I0108 12:59:08.881807       8 log.go:172] (0xc000f3cc60) (0xc00202e460) Stream removed, broadcasting: 1
I0108 12:59:08.882046       8 log.go:172] (0xc000f3cc60) (0xc00227f540) Stream removed, broadcasting: 3
I0108 12:59:08.882095       8 log.go:172] (0xc000f3cc60) (0xc0032c4b40) Stream removed, broadcasting: 5
Jan  8 12:59:08.882: INFO: Exec stderr: ""
I0108 12:59:08.882138       8 log.go:172] (0xc000f3cc60) Go away received
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  8 12:59:08.882: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:08.882: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:08.941030       8 log.go:172] (0xc000ded1e0) (0xc0032c4e60) Create stream
I0108 12:59:08.941157       8 log.go:172] (0xc000ded1e0) (0xc0032c4e60) Stream added, broadcasting: 1
I0108 12:59:08.949426       8 log.go:172] (0xc000ded1e0) Reply frame received for 1
I0108 12:59:08.949473       8 log.go:172] (0xc000ded1e0) (0xc0027e2500) Create stream
I0108 12:59:08.949482       8 log.go:172] (0xc000ded1e0) (0xc0027e2500) Stream added, broadcasting: 3
I0108 12:59:08.950496       8 log.go:172] (0xc000ded1e0) Reply frame received for 3
I0108 12:59:08.950514       8 log.go:172] (0xc000ded1e0) (0xc0027e25a0) Create stream
I0108 12:59:08.950521       8 log.go:172] (0xc000ded1e0) (0xc0027e25a0) Stream added, broadcasting: 5
I0108 12:59:08.951481       8 log.go:172] (0xc000ded1e0) Reply frame received for 5
I0108 12:59:09.031525       8 log.go:172] (0xc000ded1e0) Data frame received for 3
I0108 12:59:09.031644       8 log.go:172] (0xc0027e2500) (3) Data frame handling
I0108 12:59:09.031668       8 log.go:172] (0xc0027e2500) (3) Data frame sent
I0108 12:59:09.160680       8 log.go:172] (0xc000ded1e0) (0xc0027e2500) Stream removed, broadcasting: 3
I0108 12:59:09.160872       8 log.go:172] (0xc000ded1e0) Data frame received for 1
I0108 12:59:09.160918       8 log.go:172] (0xc000ded1e0) (0xc0027e25a0) Stream removed, broadcasting: 5
I0108 12:59:09.160939       8 log.go:172] (0xc0032c4e60) (1) Data frame handling
I0108 12:59:09.160951       8 log.go:172] (0xc0032c4e60) (1) Data frame sent
I0108 12:59:09.160958       8 log.go:172] (0xc000ded1e0) (0xc0032c4e60) Stream removed, broadcasting: 1
I0108 12:59:09.160965       8 log.go:172] (0xc000ded1e0) Go away received
I0108 12:59:09.161315       8 log.go:172] (0xc000ded1e0) (0xc0032c4e60) Stream removed, broadcasting: 1
I0108 12:59:09.161406       8 log.go:172] (0xc000ded1e0) (0xc0027e2500) Stream removed, broadcasting: 3
I0108 12:59:09.161421       8 log.go:172] (0xc000ded1e0) (0xc0027e25a0) Stream removed, broadcasting: 5
Jan  8 12:59:09.161: INFO: Exec stderr: ""
Jan  8 12:59:09.161: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:09.161: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:09.214232       8 log.go:172] (0xc000f5dce0) (0xc00227f860) Create stream
I0108 12:59:09.214483       8 log.go:172] (0xc000f5dce0) (0xc00227f860) Stream added, broadcasting: 1
I0108 12:59:09.222132       8 log.go:172] (0xc000f5dce0) Reply frame received for 1
I0108 12:59:09.222204       8 log.go:172] (0xc000f5dce0) (0xc0032c4f00) Create stream
I0108 12:59:09.222211       8 log.go:172] (0xc000f5dce0) (0xc0032c4f00) Stream added, broadcasting: 3
I0108 12:59:09.223902       8 log.go:172] (0xc000f5dce0) Reply frame received for 3
I0108 12:59:09.223935       8 log.go:172] (0xc000f5dce0) (0xc00202e5a0) Create stream
I0108 12:59:09.223944       8 log.go:172] (0xc000f5dce0) (0xc00202e5a0) Stream added, broadcasting: 5
I0108 12:59:09.226073       8 log.go:172] (0xc000f5dce0) Reply frame received for 5
I0108 12:59:09.333242       8 log.go:172] (0xc000f5dce0) Data frame received for 3
I0108 12:59:09.333573       8 log.go:172] (0xc0032c4f00) (3) Data frame handling
I0108 12:59:09.333628       8 log.go:172] (0xc0032c4f00) (3) Data frame sent
I0108 12:59:09.457269       8 log.go:172] (0xc000f5dce0) Data frame received for 1
I0108 12:59:09.457346       8 log.go:172] (0xc000f5dce0) (0xc0032c4f00) Stream removed, broadcasting: 3
I0108 12:59:09.457403       8 log.go:172] (0xc00227f860) (1) Data frame handling
I0108 12:59:09.457437       8 log.go:172] (0xc000f5dce0) (0xc00202e5a0) Stream removed, broadcasting: 5
I0108 12:59:09.457606       8 log.go:172] (0xc00227f860) (1) Data frame sent
I0108 12:59:09.457637       8 log.go:172] (0xc000f5dce0) (0xc00227f860) Stream removed, broadcasting: 1
I0108 12:59:09.457660       8 log.go:172] (0xc000f5dce0) Go away received
I0108 12:59:09.457809       8 log.go:172] (0xc000f5dce0) (0xc00227f860) Stream removed, broadcasting: 1
I0108 12:59:09.457827       8 log.go:172] (0xc000f5dce0) (0xc0032c4f00) Stream removed, broadcasting: 3
I0108 12:59:09.457843       8 log.go:172] (0xc000f5dce0) (0xc00202e5a0) Stream removed, broadcasting: 5
Jan  8 12:59:09.457: INFO: Exec stderr: ""
Jan  8 12:59:09.457: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:09.458: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:09.519675       8 log.go:172] (0xc0014708f0) (0xc00227fb80) Create stream
I0108 12:59:09.519785       8 log.go:172] (0xc0014708f0) (0xc00227fb80) Stream added, broadcasting: 1
I0108 12:59:09.527952       8 log.go:172] (0xc0014708f0) Reply frame received for 1
I0108 12:59:09.527994       8 log.go:172] (0xc0014708f0) (0xc00227fc20) Create stream
I0108 12:59:09.528001       8 log.go:172] (0xc0014708f0) (0xc00227fc20) Stream added, broadcasting: 3
I0108 12:59:09.529413       8 log.go:172] (0xc0014708f0) Reply frame received for 3
I0108 12:59:09.529437       8 log.go:172] (0xc0014708f0) (0xc0032c4fa0) Create stream
I0108 12:59:09.529447       8 log.go:172] (0xc0014708f0) (0xc0032c4fa0) Stream added, broadcasting: 5
I0108 12:59:09.530873       8 log.go:172] (0xc0014708f0) Reply frame received for 5
I0108 12:59:09.610523       8 log.go:172] (0xc0014708f0) Data frame received for 3
I0108 12:59:09.610584       8 log.go:172] (0xc00227fc20) (3) Data frame handling
I0108 12:59:09.610606       8 log.go:172] (0xc00227fc20) (3) Data frame sent
I0108 12:59:09.729093       8 log.go:172] (0xc0014708f0) (0xc00227fc20) Stream removed, broadcasting: 3
I0108 12:59:09.729372       8 log.go:172] (0xc0014708f0) Data frame received for 1
I0108 12:59:09.729406       8 log.go:172] (0xc0014708f0) (0xc0032c4fa0) Stream removed, broadcasting: 5
I0108 12:59:09.729547       8 log.go:172] (0xc00227fb80) (1) Data frame handling
I0108 12:59:09.729574       8 log.go:172] (0xc00227fb80) (1) Data frame sent
I0108 12:59:09.729594       8 log.go:172] (0xc0014708f0) (0xc00227fb80) Stream removed, broadcasting: 1
I0108 12:59:09.729614       8 log.go:172] (0xc0014708f0) Go away received
I0108 12:59:09.729952       8 log.go:172] (0xc0014708f0) (0xc00227fb80) Stream removed, broadcasting: 1
I0108 12:59:09.729990       8 log.go:172] (0xc0014708f0) (0xc00227fc20) Stream removed, broadcasting: 3
I0108 12:59:09.730008       8 log.go:172] (0xc0014708f0) (0xc0032c4fa0) Stream removed, broadcasting: 5
Jan  8 12:59:09.730: INFO: Exec stderr: ""
Jan  8 12:59:09.730: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9047 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 12:59:09.730: INFO: >>> kubeConfig: /root/.kube/config
I0108 12:59:09.799769       8 log.go:172] (0xc000cbc9a0) (0xc001745220) Create stream
I0108 12:59:09.799921       8 log.go:172] (0xc000cbc9a0) (0xc001745220) Stream added, broadcasting: 1
I0108 12:59:09.805752       8 log.go:172] (0xc000cbc9a0) Reply frame received for 1
I0108 12:59:09.805812       8 log.go:172] (0xc000cbc9a0) (0xc0032c5040) Create stream
I0108 12:59:09.805817       8 log.go:172] (0xc000cbc9a0) (0xc0032c5040) Stream added, broadcasting: 3
I0108 12:59:09.806957       8 log.go:172] (0xc000cbc9a0) Reply frame received for 3
I0108 12:59:09.806975       8 log.go:172] (0xc000cbc9a0) (0xc00227fcc0) Create stream
I0108 12:59:09.806982       8 log.go:172] (0xc000cbc9a0) (0xc00227fcc0) Stream added, broadcasting: 5
I0108 12:59:09.808143       8 log.go:172] (0xc000cbc9a0) Reply frame received for 5
I0108 12:59:09.889166       8 log.go:172] (0xc000cbc9a0) Data frame received for 3
I0108 12:59:09.889395       8 log.go:172] (0xc0032c5040) (3) Data frame handling
I0108 12:59:09.889472       8 log.go:172] (0xc0032c5040) (3) Data frame sent
I0108 12:59:10.080090       8 log.go:172] (0xc000cbc9a0) (0xc0032c5040) Stream removed, broadcasting: 3
I0108 12:59:10.080377       8 log.go:172] (0xc000cbc9a0) Data frame received for 1
I0108 12:59:10.080442       8 log.go:172] (0xc001745220) (1) Data frame handling
I0108 12:59:10.080468       8 log.go:172] (0xc001745220) (1) Data frame sent
I0108 12:59:10.080483       8 log.go:172] (0xc000cbc9a0) (0xc001745220) Stream removed, broadcasting: 1
I0108 12:59:10.080540       8 log.go:172] (0xc000cbc9a0) (0xc00227fcc0) Stream removed, broadcasting: 5
I0108 12:59:10.080617       8 log.go:172] (0xc000cbc9a0) Go away received
I0108 12:59:10.080983       8 log.go:172] (0xc000cbc9a0) (0xc001745220) Stream removed, broadcasting: 1
I0108 12:59:10.081013       8 log.go:172] (0xc000cbc9a0) (0xc0032c5040) Stream removed, broadcasting: 3
I0108 12:59:10.081019       8 log.go:172] (0xc000cbc9a0) (0xc00227fcc0) Stream removed, broadcasting: 5
Jan  8 12:59:10.081: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 12:59:10.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9047" for this suite.
Jan  8 13:00:02.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:00:02.241: INFO: namespace e2e-kubelet-etc-hosts-9047 deletion completed in 52.146288725s

• [SLOW TEST:75.554 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
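The pod specs behind the steps above are not echoed in the log. A minimal sketch of the shapes this test exercises, assuming a busybox image and illustrative volume names (the real test also creates a second pod, test-host-network-pod, identical except for HostNetwork: true, and a busybox-2 container that behaves like busybox-1):

    // Sketch only: reconstructs the kind of pod spec the KubeletManagedEtcHosts
    // test creates. Names, image, and the /etc/hosts-original mount are assumptions
    // inferred from the commands visible in the log, not the test's actual source.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                HostNetwork: false, // kubelet manages /etc/hosts for this pod
                Volumes: []corev1.Volume{{
                    Name: "host-etc-hosts",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                    },
                }},
                Containers: []corev1.Container{
                    {
                        Name:    "busybox-1",
                        Image:   "busybox",
                        Command: []string{"sleep", "3600"},
                        // The node's real /etc/hosts, mounted aside so the test
                        // can `cat /etc/hosts-original` and compare.
                        VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts-original", ReadOnly: true}},
                    },
                    {
                        Name:    "busybox-3",
                        Image:   "busybox",
                        Command: []string{"sleep", "3600"},
                        // Mounting a volume over /etc/hosts itself opts this
                        // container out of kubelet management, which the
                        // "not kubelet-managed" step above verifies.
                        VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }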
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:00:02.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  8 13:00:02.894: INFO: created pod pod-service-account-defaultsa
Jan  8 13:00:02.894: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  8 13:00:02.929: INFO: created pod pod-service-account-mountsa
Jan  8 13:00:02.929: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  8 13:00:03.043: INFO: created pod pod-service-account-nomountsa
Jan  8 13:00:03.043: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  8 13:00:03.067: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  8 13:00:03.067: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  8 13:00:03.085: INFO: created pod pod-service-account-mountsa-mountspec
Jan  8 13:00:03.085: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  8 13:00:03.195: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  8 13:00:03.195: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  8 13:00:03.240: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  8 13:00:03.240: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  8 13:00:03.276: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  8 13:00:03.276: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  8 13:00:03.476: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  8 13:00:03.476: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:00:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9557" for this suite.
Jan  8 13:00:29.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:00:29.999: INFO: namespace svcaccounts-9557 deletion completed in 26.45479876s

• [SLOW TEST:27.757 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
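The log only shows the resulting mount decisions; the knob being crossed against the ServiceAccount's own setting is the pod-level automountServiceAccountToken field. A minimal sketch with illustrative names (the pod-level field, when set, wins over the ServiceAccount's, which is why every *-nomountspec pod above reports "volume mount: false" while pod-service-account-nomountsa-mountspec reports true):

    // Sketch only: the pod-level opt-out the automount test exercises.
    // Pod and service-account names here are illustrative stand-ins.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        no := false
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
            Spec: corev1.PodSpec{
                ServiceAccountName: "default",
                // Pod-level setting; it takes precedence over the
                // ServiceAccount's own automountServiceAccountToken field.
                AutomountServiceAccountToken: &no,
                Containers: []corev1.Container{{
                    Name:    "token-test",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }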
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:00:30.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:00:30.318: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.827509ms)
Jan  8 13:00:30.325: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.323131ms)
Jan  8 13:00:30.331: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.98564ms)
Jan  8 13:00:30.345: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.036045ms)
Jan  8 13:00:30.358: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.470889ms)
Jan  8 13:00:30.378: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.223118ms)
Jan  8 13:00:30.390: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.608641ms)
Jan  8 13:00:30.397: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.015731ms)
Jan  8 13:00:30.405: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.270332ms)
Jan  8 13:00:30.414: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.455627ms)
Jan  8 13:00:30.422: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.565928ms)
Jan  8 13:00:30.437: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.364189ms)
Jan  8 13:00:30.453: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.142808ms)
Jan  8 13:00:30.469: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.586488ms)
Jan  8 13:00:30.480: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.715335ms)
Jan  8 13:00:30.505: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 25.213785ms)
Jan  8 13:00:30.515: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.031352ms)
Jan  8 13:00:30.524: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.627809ms)
Jan  8 13:00:30.529: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.971518ms)
Jan  8 13:00:30.533: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.159609ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:00:30.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8554" for this suite.
Jan  8 13:00:36.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:00:36.673: INFO: namespace proxy-8554 deletion completed in 6.135970855s

• [SLOW TEST:6.674 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
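Each of the twenty numbered requests above is a GET against the node's proxy subresource. A sketch of issuing the same request with client-go, assuming a recent client-go where Request.DoRaw takes a context; only the node name and kubeconfig path come from the log:

    // Sketch only: GET /api/v1/nodes/iruya-node/proxy/logs/ — the kubelet's
    // log directory listing, served through the API server's proxy subresource.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        body, err := clientset.CoreV1().RESTClient().
            Get().
            Resource("nodes").
            Name("iruya-node").
            SubResource("proxy").
            Suffix("logs/").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body))
    }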
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:00:36.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  8 13:00:36.836: INFO: Waiting up to 5m0s for pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6" in namespace "emptydir-5297" to be "success or failure"
Jan  8 13:00:36.920: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 84.858072ms
Jan  8 13:00:38.939: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103137449s
Jan  8 13:00:40.955: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119204289s
Jan  8 13:00:42.984: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148097905s
Jan  8 13:00:44.992: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156132014s
STEP: Saw pod success
Jan  8 13:00:44.992: INFO: Pod "pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6" satisfied condition "success or failure"
Jan  8 13:00:45.017: INFO: Trying to get logs from node iruya-node pod pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6 container test-container: <nil>
STEP: delete the pod
Jan  8 13:00:45.176: INFO: Waiting for pod pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6 to disappear
Jan  8 13:00:45.182: INFO: Pod pod-f092cbfe-8be9-437c-8b79-3e5c1516fcb6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:00:45.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5297" for this suite.
Jan  8 13:00:51.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:00:51.369: INFO: namespace emptydir-5297 deletion completed in 6.180764676s

• [SLOW TEST:14.695 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
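The test's pod spec is not printed. A sketch of an emptyDir pod of the same shape, with busybox standing in for the test's dedicated mount-test image, creating a 0644 file on the default medium and printing its mode:

    // Sketch only: an emptyDir pod matching "emptydir 0644 on node default medium".
    // Image and command are illustrative, not the test's actual container.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // run once; the test then fetches logs
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Medium left empty = the default (node disk), matching
                    // "on node default medium" in the step above.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && stat -c %a /mnt/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }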
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:00:51.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-34dd7aae-f5c3-44a9-875a-f58313c66575
STEP: Creating a pod to test consume secrets
Jan  8 13:00:51.595: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0" in namespace "projected-418" to be "success or failure"
Jan  8 13:00:51.601: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.848365ms
Jan  8 13:00:53.613: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017491476s
Jan  8 13:00:55.618: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022666668s
Jan  8 13:00:57.626: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030718696s
Jan  8 13:00:59.635: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039996224s
STEP: Saw pod success
Jan  8 13:00:59.635: INFO: Pod "pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0" satisfied condition "success or failure"
Jan  8 13:00:59.641: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan  8 13:00:59.704: INFO: Waiting for pod pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0 to disappear
Jan  8 13:00:59.851: INFO: Pod pod-projected-secrets-c02223c6-42c3-4265-a611-fa874b4094a0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:00:59.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-418" for this suite.
Jan  8 13:01:05.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:01:06.033: INFO: namespace projected-418 deletion completed in 6.165065548s

• [SLOW TEST:14.662 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
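A sketch of the projected-secret volume with a key-to-path mapping that this test (and the multi-volume variant that follows it) consumes; the secret name, key, and paths are illustrative:

    // Sketch only: a projected volume sourcing a Secret, remapping key
    // "data-1" to a new file name — the "with mappings" part of the test.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "projected-secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                    // Key "data-1" surfaces under a mapped
                                    // file name instead of the key itself.
                                    Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "projected-secret-volume-test",
                    Image:        "busybox",
                    Command:      []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }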
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:01:06.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-ec256d9f-aac2-47ee-80cc-b129fa8cd9b2
STEP: Creating a pod to test consume secrets
Jan  8 13:01:06.221: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f" in namespace "projected-1270" to be "success or failure"
Jan  8 13:01:06.229: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017462ms
Jan  8 13:01:08.241: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019783461s
Jan  8 13:01:10.348: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126411845s
Jan  8 13:01:12.367: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145641002s
Jan  8 13:01:14.383: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161983289s
STEP: Saw pod success
Jan  8 13:01:14.383: INFO: Pod "pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f" satisfied condition "success or failure"
Jan  8 13:01:14.388: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f container secret-volume-test: <nil>
STEP: delete the pod
Jan  8 13:01:14.524: INFO: Waiting for pod pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f to disappear
Jan  8 13:01:14.535: INFO: Pod pod-projected-secrets-f82ffd2a-7577-4200-8ac4-2e78cbe58f2f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:01:14.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1270" for this suite.
Jan  8 13:01:20.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:01:20.685: INFO: namespace projected-1270 deletion completed in 6.142463681s

• [SLOW TEST:14.652 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:01:20.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:01:20.866: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f" in namespace "projected-2051" to be "success or failure"
Jan  8 13:01:20.878: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.865293ms
Jan  8 13:01:22.919: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053419686s
Jan  8 13:01:24.928: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062690426s
Jan  8 13:01:26.944: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078559514s
Jan  8 13:01:28.951: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085327032s
STEP: Saw pod success
Jan  8 13:01:28.951: INFO: Pod "downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f" satisfied condition "success or failure"
Jan  8 13:01:28.954: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f container client-container: <nil>
STEP: delete the pod
Jan  8 13:01:29.085: INFO: Waiting for pod downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f to disappear
Jan  8 13:01:29.097: INFO: Pod downwardapi-volume-4db3053c-a386-4449-8d07-21be7a05bf5f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:01:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2051" for this suite.
Jan  8 13:01:35.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:01:35.310: INFO: namespace projected-2051 deletion completed in 6.204835986s

• [SLOW TEST:14.624 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
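A sketch of the downward-API projection this test reads back; the file path and the 500m limit are illustrative, while limits.cpu is the resource field the test title names:

    // Sketch only: a projected downward-API volume exposing the container's
    // own CPU limit as a file the test can cat back.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/podinfo/cpu_limit"},
                    // The limit that should show up in the projected file.
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }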
SSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:01:35.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  8 13:01:35.443: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-925" to be "success or failure"
Jan  8 13:01:35.447: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.501962ms
Jan  8 13:01:37.455: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012363126s
Jan  8 13:01:39.465: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02210797s
Jan  8 13:01:41.483: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040305855s
Jan  8 13:01:43.491: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047853383s
Jan  8 13:01:45.502: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059095184s
STEP: Saw pod success
Jan  8 13:01:45.502: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  8 13:01:45.509: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jan  8 13:01:45.568: INFO: Waiting for pod pod-host-path-test to disappear
Jan  8 13:01:45.572: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:01:45.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-925" for this suite.
Jan  8 13:01:51.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:01:51.828: INFO: namespace hostpath-925 deletion completed in 6.250924743s

• [SLOW TEST:16.518 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
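A sketch of a hostPath pod of the shape pod-host-path-test takes; the host path and the mode-printing command are stand-ins for the test's actual containers:

    // Sketch only: a hostPath volume whose mount-point mode the pod prints,
    // mirroring the "correct mode" assertion above.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-demo"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container-1",
                    Image: "busybox",
                    // Print the mode of the mount point; the test asserts the
                    // volume directory carries the expected mode.
                    Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }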
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:01:51.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  8 13:01:51.999: INFO: Waiting up to 5m0s for pod "pod-e9b31306-42c2-4830-8824-148873bbc595" in namespace "emptydir-8584" to be "success or failure"
Jan  8 13:01:52.028: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Pending", Reason="", readiness=false. Elapsed: 28.74982ms
Jan  8 13:01:54.039: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039909564s
Jan  8 13:01:56.046: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046436127s
Jan  8 13:01:58.057: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057798472s
Jan  8 13:02:00.063: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063834666s
Jan  8 13:02:02.073: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073787919s
STEP: Saw pod success
Jan  8 13:02:02.073: INFO: Pod "pod-e9b31306-42c2-4830-8824-148873bbc595" satisfied condition "success or failure"
Jan  8 13:02:02.078: INFO: Trying to get logs from node iruya-node pod pod-e9b31306-42c2-4830-8824-148873bbc595 container test-container: <nil>
STEP: delete the pod
Jan  8 13:02:02.148: INFO: Waiting for pod pod-e9b31306-42c2-4830-8824-148873bbc595 to disappear
Jan  8 13:02:02.153: INFO: Pod pod-e9b31306-42c2-4830-8824-148873bbc595 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:02:02.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8584" for this suite.
Jan  8 13:02:08.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:02:08.311: INFO: namespace emptydir-8584 deletion completed in 6.14939175s

• [SLOW TEST:16.483 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:02:08.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 13:02:20.547: INFO: File wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-a30f9812-a273-4e42-a472-ffcfd9b512d4 contains '' instead of 'foo.example.com.'
Jan  8 13:02:20.557: INFO: File jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-a30f9812-a273-4e42-a472-ffcfd9b512d4 contains '' instead of 'foo.example.com.'
Jan  8 13:02:20.557: INFO: Lookups using dns-9551/dns-test-a30f9812-a273-4e42-a472-ffcfd9b512d4 failed for: [wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:02:25.582: INFO: DNS probes using dns-test-a30f9812-a273-4e42-a472-ffcfd9b512d4 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 13:02:43.781: INFO: File wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff contains '' instead of 'bar.example.com.'
Jan  8 13:02:43.799: INFO: File jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff contains '' instead of 'bar.example.com.'
Jan  8 13:02:43.799: INFO: Lookups using dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff failed for: [wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:02:48.812: INFO: File wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  8 13:02:48.818: INFO: File jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  8 13:02:48.818: INFO: Lookups using dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff failed for: [wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:02:53.825: INFO: File jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  8 13:02:53.825: INFO: Lookups using dns-9551/dns-test-42cc6ef0-8f52-436c-950e-608f072050ff failed for: [jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:02:58.830: INFO: DNS probes using dns-test-42cc6ef0-8f52-436c-950e-608f072050ff succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9551.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 13:03:13.250: INFO: File wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 contains '' instead of '10.102.245.248'
Jan  8 13:03:13.259: INFO: File jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 contains '' instead of '10.102.245.248'
Jan  8 13:03:13.259: INFO: Lookups using dns-9551/dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 failed for: [wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local jessie_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:03:18.279: INFO: File wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local from pod  dns-9551/dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 contains '' instead of '10.102.245.248'
Jan  8 13:03:18.287: INFO: Lookups using dns-9551/dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 failed for: [wheezy_udp@dns-test-service-3.dns-9551.svc.cluster.local]

Jan  8 13:03:23.296: INFO: DNS probes using dns-test-abb0bb14-efe9-46f2-9caa-b56812c20862 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:03:23.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9551" for this suite.
Jan  8 13:03:31.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:03:31.918: INFO: namespace dns-9551 deletion completed in 8.236717349s

• [SLOW TEST:83.606 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
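A sketch of the externalName service the steps above create and then mutate; the dig loops resolve this service's in-cluster DNS name, flipping spec.externalName changes the CNAME answer (foo.example.com. to bar.example.com.), and switching the type to ClusterIP turns the answer into an A record (10.102.245.248 above):

    // Sketch only: the ExternalName service the DNS test starts from.
    // Name and namespace are taken from the log; the rest is inferred.
    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-9551"},
            Spec: corev1.ServiceSpec{
                Type:         corev1.ServiceTypeExternalName,
                ExternalName: "foo.example.com", // later changed to bar.example.com
            },
        }
        out, _ := json.MarshalIndent(svc, "", "  ")
        fmt.Println(string(out))
    }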
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:03:31.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-010c3f65-31d9-4af1-b789-8bf069b10d77
STEP: Creating a pod to test consume configMaps
Jan  8 13:03:32.078: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801" in namespace "projected-9593" to be "success or failure"
Jan  8 13:03:32.101: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801": Phase="Pending", Reason="", readiness=false. Elapsed: 23.618354ms
Jan  8 13:03:34.117: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039490063s
Jan  8 13:03:36.123: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045444434s
Jan  8 13:03:38.150: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071781501s
Jan  8 13:03:40.155: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077684207s
STEP: Saw pod success
Jan  8 13:03:40.155: INFO: Pod "pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801" satisfied condition "success or failure"
Jan  8 13:03:40.158: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 13:03:40.979: INFO: Waiting for pod pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801 to disappear
Jan  8 13:03:40.992: INFO: Pod pod-projected-configmaps-8fe1ed34-c612-40eb-81c1-604c0453e801 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:03:40.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9593" for this suite.
Jan  8 13:03:47.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:03:47.604: INFO: namespace projected-9593 deletion completed in 6.184248502s

• [SLOW TEST:15.686 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
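The test above mounts a single ConfigMap into one pod through two projected volumes. A rough sketch of the same shape (demo-cm and projected-cm-demo are illustrative names, not the test's):

kubectl create configmap demo-cm --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.28
    # Read the same key through both mounts; both lines should print "value".
    command: ["sh", "-c", "cat /etc/cm-a/key /etc/cm-b/key"]
    volumeMounts:
    - name: vol-a
      mountPath: /etc/cm-a
    - name: vol-b
      mountPath: /etc/cm-b
  volumes:
  - name: vol-a
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: vol-b
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
kubectl logs projected-cm-demo
------------------------------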
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:03:47.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  8 13:03:47.723: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:04:01.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5406" for this suite.
Jan  8 13:04:07.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:04:07.442: INFO: namespace init-container-5406 deletion completed in 6.15746772s

• [SLOW TEST:19.838 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
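What "invoke init containers on a RestartNever pod" exercises: every initContainer must run to completion, in order, before the main container starts, and with restartPolicy: Never a failed init container fails the pod outright. A minimal sketch (all names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox:1.28
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox:1.28
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "echo main ran"]
EOF
# Both init containers should report Completed before "main" ever starts:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
------------------------------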
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:04:07.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:04:07.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df" in namespace "projected-7415" to be "success or failure"
Jan  8 13:04:07.581: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df": Phase="Pending", Reason="", readiness=false. Elapsed: 49.901487ms
Jan  8 13:04:09.593: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061959341s
Jan  8 13:04:11.618: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087214876s
Jan  8 13:04:13.646: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115673002s
Jan  8 13:04:15.659: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.128122581s
STEP: Saw pod success
Jan  8 13:04:15.659: INFO: Pod "downwardapi-volume-279b240f-03f9-4928-943a-553dded795df" satisfied condition "success or failure"
Jan  8 13:04:15.666: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-279b240f-03f9-4928-943a-553dded795df container client-container: 
STEP: delete the pod
Jan  8 13:04:15.724: INFO: Waiting for pod downwardapi-volume-279b240f-03f9-4928-943a-553dded795df to disappear
Jan  8 13:04:15.740: INFO: Pod downwardapi-volume-279b240f-03f9-4928-943a-553dded795df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:04:15.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7415" for this suite.
Jan  8 13:04:21.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:04:22.016: INFO: namespace projected-7415 deletion completed in 6.268341815s

• [SLOW TEST:14.574 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
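The pod under test exposes its own memory request as a file via a projected downwardAPI volume; the container cats the file and the framework compares the output. A sketch under illustrative names (the value is rendered in bytes by default):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl logs downward-request-demo   # expect 33554432, i.e. 32Mi in bytes
------------------------------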
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:04:22.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 13:04:29.211: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:04:29.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4674" for this suite.
Jan  8 13:04:35.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:04:35.525: INFO: namespace container-runtime-4674 deletion completed in 6.206640932s

• [SLOW TEST:13.508 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
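The scenario being verified: a container running as a non-root UID writes its termination message to a path other than the default /dev/termination-log, and the kubelet still surfaces it in the container status. A sketch (names and UID illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.28
    securityContext:
      runAsUser: 1000
    # Non-default termination message path, matching the test's "DONE" message.
    terminationMessagePath: /dev/termination-custom
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # DONE
------------------------------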
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:04:35.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  8 13:04:51.821: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:04:51.919: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:04:53.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:04:53.936: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:04:55.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:04:55.931: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:04:57.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:04:57.929: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:04:59.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:04:59.929: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:01.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:01.933: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:03.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:03.933: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:05.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:05.929: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:07.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:07.933: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:09.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:09.939: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:11.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:11.927: INFO: Pod pod-with-prestop-exec-hook still exists
Jan  8 13:05:13.919: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan  8 13:05:13.930: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:05:13.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6083" for this suite.
Jan  8 13:05:36.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:05:36.224: INFO: namespace container-lifecycle-hook-6083 deletion completed in 22.258791109s

• [SLOW TEST:60.699 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
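The slow deletion above (roughly 22 seconds of "still exists" polls) is expected: a preStop hook runs, and is given up to the termination grace period to finish, before the container receives SIGTERM. The e2e hook calls out to the HTTP handler pod created earlier; a simpler local-command sketch (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo goodbye > /tmp/prestop; sleep 5"]
EOF
# Deletion blocks while the hook runs, then the container is signalled:
kubectl delete pod prestop-demo
------------------------------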
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:05:36.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:05:36.378: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494" in namespace "downward-api-6002" to be "success or failure"
Jan  8 13:05:36.408: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494": Phase="Pending", Reason="", readiness=false. Elapsed: 30.345995ms
Jan  8 13:05:38.418: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040008752s
Jan  8 13:05:40.428: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050628469s
Jan  8 13:05:42.445: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067094522s
Jan  8 13:05:44.456: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078270217s
STEP: Saw pod success
Jan  8 13:05:44.456: INFO: Pod "downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494" satisfied condition "success or failure"
Jan  8 13:05:44.463: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494 container client-container: 
STEP: delete the pod
Jan  8 13:05:44.614: INFO: Waiting for pod downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494 to disappear
Jan  8 13:05:44.636: INFO: Pod downwardapi-volume-7120b49c-dc86-40bc-90ce-2d0dfa63d494 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:05:44.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6002" for this suite.
Jan  8 13:05:51.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:05:51.839: INFO: namespace downward-api-6002 deletion completed in 7.196024232s

• [SLOW TEST:15.615 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
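Same downward API pattern as the projected-volume sketch earlier, but through a plain downwardAPI volume and exposing the memory limit instead of the request (illustrative names again):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
kubectl logs downward-limit-demo   # expect 67108864, i.e. 64Mi in bytes
------------------------------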
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:05:51.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-baeb339f-8897-4ac4-a710-68dd4db217fc in namespace container-probe-5721
Jan  8 13:06:00.073: INFO: Started pod liveness-baeb339f-8897-4ac4-a710-68dd4db217fc in namespace container-probe-5721
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 13:06:00.079: INFO: Initial restart count of pod liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is 0
Jan  8 13:06:22.696: INFO: Restart count of pod container-probe-5721/liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is now 1 (22.616904599s elapsed)
Jan  8 13:06:42.808: INFO: Restart count of pod container-probe-5721/liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is now 2 (42.728865139s elapsed)
Jan  8 13:07:03.031: INFO: Restart count of pod container-probe-5721/liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is now 3 (1m2.951825806s elapsed)
Jan  8 13:07:23.127: INFO: Restart count of pod container-probe-5721/liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is now 4 (1m23.047735377s elapsed)
Jan  8 13:08:31.742: INFO: Restart count of pod container-probe-5721/liveness-baeb339f-8897-4ac4-a710-68dd4db217fc is now 5 (2m31.663441363s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:08:31.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5721" for this suite.
Jan  8 13:08:37.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:08:38.060: INFO: namespace container-probe-5721 deletion completed in 6.139554057s

• [SLOW TEST:166.219 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
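The widening gaps between restarts above (four gaps of about 20s, then about 69s) are the kubelet's exponential back-off kicking in as the liveness probe keeps failing. A deliberately failing exec probe reproduces the pattern (illustrative names; the e2e test uses its own probe image rather than busybox):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-fail-demo
spec:
  containers:
  - name: liveness
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the file is never created, so the probe always fails
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-fail-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------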
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:08:38.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  8 13:08:38.137: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  8 13:08:39.038: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  8 13:08:41.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 13:08:43.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 13:08:45.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 13:08:47.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714085719, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 13:08:53.325: INFO: Waited 4.0460211s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:08:53.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2645" for this suite.
Jan  8 13:08:59.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:09:00.121: INFO: namespace aggregator-2645 deletion completed in 6.27216622s

• [SLOW TEST:22.061 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
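The polling above waits for the extension API server's Deployment to become available; once it is, the aggregator marks the corresponding APIService object Available and starts proxying requests to it. Checking the same thing by hand (the APIService name below is what this era of the sample API server registers; treat it as an assumption and substitute whatever your extension registers):

kubectl get apiservices
kubectl get apiservice v1alpha1.wardle.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'   # "True" once ready
------------------------------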
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:09:00.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-2c774633-45ac-4a91-beb8-87f36f18cd34
STEP: Creating a pod to test consume secrets
Jan  8 13:09:00.337: INFO: Waiting up to 5m0s for pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669" in namespace "secrets-2353" to be "success or failure"
Jan  8 13:09:00.355: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Pending", Reason="", readiness=false. Elapsed: 17.904666ms
Jan  8 13:09:02.368: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030924341s
Jan  8 13:09:04.380: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043537608s
Jan  8 13:09:06.386: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049571675s
Jan  8 13:09:08.401: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064411946s
Jan  8 13:09:10.413: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075945666s
STEP: Saw pod success
Jan  8 13:09:10.413: INFO: Pod "pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669" satisfied condition "success or failure"
Jan  8 13:09:10.421: INFO: Trying to get logs from node iruya-node pod pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669 container secret-volume-test: 
STEP: delete the pod
Jan  8 13:09:10.535: INFO: Waiting for pod pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669 to disappear
Jan  8 13:09:10.605: INFO: Pod pod-secrets-64efe472-255a-45c4-ba4d-cea1977bc669 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:09:10.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2353" for this suite.
Jan  8 13:09:16.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:09:16.801: INFO: namespace secrets-2353 deletion completed in 6.18554006s

• [SLOW TEST:16.679 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
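What defaultMode does here: it sets the permission bits on every file the secret volume projects, so the test can assert the expected mode from inside the pod. A sketch with a restrictive 0400 (illustrative names):

kubectl create secret generic demo-secret --from-literal=data=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "ls -lL /etc/secret-volume && cat /etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF
kubectl logs secret-mode-demo   # expect -r-------- on "data", followed by its value
------------------------------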
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:09:16.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 13:09:16.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-872'
Jan  8 13:09:19.963: INFO: stderr: ""
Jan  8 13:09:19.963: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan  8 13:09:20.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-872'
Jan  8 13:09:26.265: INFO: stderr: ""
Jan  8 13:09:26.265: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:09:26.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-872" for this suite.
Jan  8 13:09:32.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:09:32.438: INFO: namespace kubectl-872 deletion completed in 6.165035961s

• [SLOW TEST:15.637 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:09:32.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  8 13:09:32.522: INFO: Waiting up to 5m0s for pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed" in namespace "emptydir-3882" to be "success or failure"
Jan  8 13:09:32.537: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Pending", Reason="", readiness=false. Elapsed: 14.480984ms
Jan  8 13:09:34.551: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028441904s
Jan  8 13:09:36.560: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03748825s
Jan  8 13:09:38.569: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046765193s
Jan  8 13:09:40.599: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076790297s
Jan  8 13:09:42.609: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086322192s
STEP: Saw pod success
Jan  8 13:09:42.609: INFO: Pod "pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed" satisfied condition "success or failure"
Jan  8 13:09:42.612: INFO: Trying to get logs from node iruya-node pod pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed container test-container: 
STEP: delete the pod
Jan  8 13:09:42.792: INFO: Waiting for pod pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed to disappear
Jan  8 13:09:42.801: INFO: Pod pod-fbd7fdb8-2368-4e4f-a87f-2d839395ebed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:09:42.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3882" for this suite.
Jan  8 13:09:48.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:09:48.956: INFO: namespace emptydir-3882 deletion completed in 6.14287196s

• [SLOW TEST:16.517 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
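"Default medium" means the emptyDir is backed by node disk (no medium: Memory), and "correct mode" refers to the volume's root directory being world-writable so any container UID can use it. A hand check (illustrative names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF
kubectl logs emptydir-mode-demo   # expect drwxrwxrwx on /test-volume
------------------------------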
S
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:09:48.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  8 13:10:01.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1bcab697-f6b2-45df-b74b-2c1411c19707 -c busybox-main-container --namespace=emptydir-5256 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  8 13:10:01.648: INFO: stderr: "I0108 13:10:01.392552     226 log.go:172] (0xc000a30420) (0xc0003ca8c0) Create stream\nI0108 13:10:01.392970     226 log.go:172] (0xc000a30420) (0xc0003ca8c0) Stream added, broadcasting: 1\nI0108 13:10:01.409178     226 log.go:172] (0xc000a30420) Reply frame received for 1\nI0108 13:10:01.409301     226 log.go:172] (0xc000a30420) (0xc0003ca000) Create stream\nI0108 13:10:01.409355     226 log.go:172] (0xc000a30420) (0xc0003ca000) Stream added, broadcasting: 3\nI0108 13:10:01.410832     226 log.go:172] (0xc000a30420) Reply frame received for 3\nI0108 13:10:01.410880     226 log.go:172] (0xc000a30420) (0xc000622320) Create stream\nI0108 13:10:01.410899     226 log.go:172] (0xc000a30420) (0xc000622320) Stream added, broadcasting: 5\nI0108 13:10:01.413058     226 log.go:172] (0xc000a30420) Reply frame received for 5\nI0108 13:10:01.501427     226 log.go:172] (0xc000a30420) Data frame received for 3\nI0108 13:10:01.501564     226 log.go:172] (0xc0003ca000) (3) Data frame handling\nI0108 13:10:01.501625     226 log.go:172] (0xc0003ca000) (3) Data frame sent\nI0108 13:10:01.630821     226 log.go:172] (0xc000a30420) Data frame received for 1\nI0108 13:10:01.631241     226 log.go:172] (0xc000a30420) (0xc0003ca000) Stream removed, broadcasting: 3\nI0108 13:10:01.631430     226 log.go:172] (0xc0003ca8c0) (1) Data frame handling\nI0108 13:10:01.631506     226 log.go:172] (0xc0003ca8c0) (1) Data frame sent\nI0108 13:10:01.631590     226 log.go:172] (0xc000a30420) (0xc000622320) Stream removed, broadcasting: 5\nI0108 13:10:01.631766     226 log.go:172] (0xc000a30420) (0xc0003ca8c0) Stream removed, broadcasting: 1\nI0108 13:10:01.631811     226 log.go:172] (0xc000a30420) Go away received\nI0108 13:10:01.634027     226 log.go:172] (0xc000a30420) (0xc0003ca8c0) Stream removed, broadcasting: 1\nI0108 13:10:01.634124     226 log.go:172] (0xc000a30420) (0xc0003ca000) Stream removed, broadcasting: 3\nI0108 13:10:01.634151     226 log.go:172] (0xc000a30420) (0xc000622320) Stream removed, broadcasting: 5\n"
Jan  8 13:10:01.648: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:10:01.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5256" for this suite.
Jan  8 13:10:07.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:10:07.814: INFO: namespace emptydir-5256 deletion completed in 6.15466323s

• [SLOW TEST:18.858 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
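The exec above reads, from the pod's main container, a file that its other container wrote into a shared emptyDir. The essential shape, reduced to two busybox containers (illustrative names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: writer
    image: busybox:1.28
    command: ["sh", "-c", "echo 'Hello from the writer' > /share/data.txt && sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /share
  - name: reader
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: share
      mountPath: /share
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec shared-volume-demo -c reader -- cat /share/data.txt
------------------------------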
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:10:07.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-a522171e-34b1-434a-b1ee-767c453a58d5
STEP: Creating configMap with name cm-test-opt-upd-7e976ddb-74f0-4d34-8418-cd33206c7020
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a522171e-34b1-434a-b1ee-767c453a58d5
STEP: Updating configmap cm-test-opt-upd-7e976ddb-74f0-4d34-8418-cd33206c7020
STEP: Creating configMap with name cm-test-opt-create-d55b73ac-1d74-4996-b5e7-53a701f6e242
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:10:28.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2835" for this suite.
Jan  8 13:10:50.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:10:50.603: INFO: namespace projected-2835 deletion completed in 22.162112383s

• [SLOW TEST:42.789 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
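"Optional" refers to configMap volume sources marked optional: true: the pod starts even if a referenced ConfigMap is missing, and creations, updates, and deletions are reflected in the volume on the kubelet's next sync, which is why the test spends about 20s "waiting to observe update in volume". A sketch (illustrative names):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: maybe-missing-cm
      optional: true   # the pod starts even though this ConfigMap doesn't exist yet
EOF
kubectl create configmap maybe-missing-cm --from-literal=k=v
# After the next kubelet sync the key appears in the volume:
kubectl exec optional-cm-demo -- cat /etc/cm/k
------------------------------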
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:10:50.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan  8 13:11:01.291: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2898 pod-service-account-bf4c2e1f-e050-4d35-bdd1-94dfe9f2d78e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan  8 13:11:01.783: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2898 pod-service-account-bf4c2e1f-e050-4d35-bdd1-94dfe9f2d78e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan  8 13:11:02.266: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2898 pod-service-account-bf4c2e1f-e050-4d35-bdd1-94dfe9f2d78e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:11:02.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2898" for this suite.
Jan  8 13:11:08.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:11:09.044: INFO: namespace svcaccounts-2898 deletion completed in 6.101132663s

• [SLOW TEST:18.439 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:11:09.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:11:19.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9969" for this suite.
Jan  8 13:12:01.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:12:01.388: INFO: namespace kubelet-test-9969 deletion completed in 42.164770225s

• [SLOW TEST:52.344 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:12:01.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-9ac95e22-f011-46d4-9127-698cd66686df in namespace container-probe-1713
Jan  8 13:12:09.514: INFO: Started pod busybox-9ac95e22-f011-46d4-9127-698cd66686df in namespace container-probe-1713
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 13:12:09.518: INFO: Initial restart count of pod busybox-9ac95e22-f011-46d4-9127-698cd66686df is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:16:11.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1713" for this suite.
Jan  8 13:16:17.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:16:17.647: INFO: namespace container-probe-1713 deletion completed in 6.15269885s

• [SLOW TEST:256.258 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
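The mirror image of the failing-probe case earlier: here the container creates /tmp/health at startup, so the exec probe always succeeds and the restart count must stay at 0 for the whole observation window (the test watches for roughly four minutes, hence the 256s runtime). Sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "touch /tmp/health && sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-ok-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # should stay 0
------------------------------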
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:16:17.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  8 13:16:17.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  8 13:16:18.074: INFO: stderr: ""
Jan  8 13:16:18.075: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:16:18.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5518" for this suite.
Jan  8 13:16:24.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:16:24.239: INFO: namespace kubectl-5518 deletion completed in 6.157694143s

• [SLOW TEST:6.592 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:16:24.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan  8 13:16:24.371: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:16:46.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8440" for this suite.
Jan  8 13:16:52.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:16:52.766: INFO: namespace pods-8440 deletion completed in 6.132484682s

• [SLOW TEST:28.526 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
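The "setting up watch" / "creation was observed" / "deletion was observed" steps amount to a watch on the pod list plus a graceful delete. The CLI equivalent, run across two terminals:

# Terminal 1: stream pod status changes as they happen.
kubectl get pods -w
# Terminal 2: drive a create/delete cycle for the watcher to observe.
kubectl run watch-demo --image=busybox:1.28 --restart=Never -- sleep 60
kubectl delete pod watch-demo --grace-period=30
------------------------------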
SSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:16:52.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-791
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-791 to expose endpoints map[]
Jan  8 13:16:52.948: INFO: successfully validated that service multi-endpoint-test in namespace services-791 exposes endpoints map[] (36.718979ms elapsed)
STEP: Creating pod pod1 in namespace services-791
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-791 to expose endpoints map[pod1:[100]]
Jan  8 13:16:57.164: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.147958785s elapsed, will retry)
Jan  8 13:17:00.204: INFO: successfully validated that service multi-endpoint-test in namespace services-791 exposes endpoints map[pod1:[100]] (7.187653597s elapsed)
STEP: Creating pod pod2 in namespace services-791
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-791 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  8 13:17:06.307: INFO: Unexpected endpoints: found map[bbabf8ca-d8f3-43fb-90d6-37f491cb8efc:[100]], expected map[pod1:[100] pod2:[101]] (6.079234589s elapsed, will retry)
Jan  8 13:17:08.364: INFO: successfully validated that service multi-endpoint-test in namespace services-791 exposes endpoints map[pod1:[100] pod2:[101]] (8.136432031s elapsed)
STEP: Deleting pod pod1 in namespace services-791
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-791 to expose endpoints map[pod2:[101]]
Jan  8 13:17:08.483: INFO: successfully validated that service multi-endpoint-test in namespace services-791 exposes endpoints map[pod2:[101]] (112.35686ms elapsed)
STEP: Deleting pod pod2 in namespace services-791
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-791 to expose endpoints map[]
Jan  8 13:17:08.516: INFO: successfully validated that service multi-endpoint-test in namespace services-791 exposes endpoints map[] (12.459349ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:17:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-791" for this suite.
Jan  8 13:17:32.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:17:32.815: INFO: namespace services-791 deletion completed in 24.186703385s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.049 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
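
[Annotation] The multiport-endpoints spec creates a service with two named ports and then polls its Endpoints object until each backing pod appears, which is what the "expose endpoints map[pod1:[100] pod2:[101]]" lines record. A sketch under the same v1.15-era client-go assumption; names, ports, and namespace are illustrative. Note that a service with more than one port must name each port.

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative

        // Two named ports targeting the container ports 100 and 101 seen in the log.
        svc := &v1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
            Spec: v1.ServiceSpec{
                Selector: map[string]string{"app": "multiport-demo"},
                Ports: []v1.ServicePort{
                    {Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
                    {Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
                },
            },
        }
        if _, err := cs.CoreV1().Services(ns).Create(svc); err != nil {
            panic(err)
        }

        // Poll the Endpoints object: each ready pod matching the selector shows up
        // as an address in a subset, one subset port per named service port.
        for i := 0; i < 10; i++ {
            ep, err := cs.CoreV1().Endpoints(ns).Get("multi-endpoint-test", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, ss := range ep.Subsets {
                for _, p := range ss.Ports {
                    fmt.Printf("port %s/%d backed by %d address(es)\n", p.Name, p.Port, len(ss.Addresses))
                }
            }
            time.Sleep(3 * time.Second)
        }
    }
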
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:17:32.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:17:32.947: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5" in namespace "projected-8325" to be "success or failure"
Jan  8 13:17:33.013: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 65.981634ms
Jan  8 13:17:35.023: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076410773s
Jan  8 13:17:37.033: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085859667s
Jan  8 13:17:39.070: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12347383s
Jan  8 13:17:41.090: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143400093s
STEP: Saw pod success
Jan  8 13:17:41.090: INFO: Pod "downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5" satisfied condition "success or failure"
Jan  8 13:17:41.097: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5 container client-container: 
STEP: delete the pod
Jan  8 13:17:41.202: INFO: Waiting for pod downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5 to disappear
Jan  8 13:17:41.310: INFO: Pod downwardapi-volume-16120777-d41b-4af5-a58d-e857f6ef1ac5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:17:41.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8325" for this suite.
Jan  8 13:17:47.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:17:47.521: INFO: namespace projected-8325 deletion completed in 6.186660735s

• [SLOW TEST:14.706 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
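
[Annotation] The downward-API spec above mounts the container's own cpu request as a file via a projected volume. A sketch of the pod shape involved, with illustrative names and the same v1.15-era client-go assumption. The divisor matters: with the default divisor of "1" a 250m request reads back as "1" (rounded up to whole cores), so "1m" is set here to read back "250".

    package main

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        divisor := resource.MustParse("1m")
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-request-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("250m")},
                    },
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        // A projected volume whose single source is the downward API.
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                DownwardAPI: &v1.DownwardAPIProjection{
                                    Items: []v1.DownwardAPIVolumeFile{{
                                        Path: "cpu_request",
                                        ResourceFieldRef: &v1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "requests.cpu",
                                            Divisor:       divisor,
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil { // namespace illustrative
            panic(err)
        }
    }
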
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:17:47.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:17:55.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6750" for this suite.
Jan  8 13:18:01.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:18:01.975: INFO: namespace kubelet-test-6750 deletion completed in 6.255836754s

• [SLOW TEST:14.454 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
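
[Annotation] The "always fails" kubelet spec runs a command with a non-zero exit and checks that the container status carries a terminated reason. A sketch of the same check, with illustrative names; a non-zero exit normally surfaces as reason "Error".

    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative

        // A command that always fails, like the suite's busybox pod.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "bin-false",
                    Image:   "busybox",
                    Command: []string{"/bin/false"},
                }},
            },
        }
        created, err := cs.CoreV1().Pods(ns).Create(pod)
        if err != nil {
            panic(err)
        }

        // Poll until the kubelet reports a terminated state, then read its reason.
        for {
            p, err := cs.CoreV1().Pods(ns).Get(created.Name, metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if len(p.Status.ContainerStatuses) > 0 {
                if t := p.Status.ContainerStatuses[0].State.Terminated; t != nil {
                    fmt.Printf("reason=%s exitCode=%d\n", t.Reason, t.ExitCode)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
    }
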
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:18:01.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:18:02.094: INFO: Creating ReplicaSet my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313
Jan  8 13:18:02.113: INFO: Pod name my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313: Found 0 pods out of 1
Jan  8 13:18:07.136: INFO: Pod name my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313: Found 1 pods out of 1
Jan  8 13:18:07.136: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313" is running
Jan  8 13:18:09.149: INFO: Pod "my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313-6v7bp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 13:18:02 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 13:18:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 13:18:02 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 13:18:02 +0000 UTC Reason: Message:}])
Jan  8 13:18:09.150: INFO: Trying to dial the pod
Jan  8 13:18:14.185: INFO: Controller my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313: Got expected result from replica 1 [my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313-6v7bp]: "my-hostname-basic-266db311-3c31-447f-a6f2-485a4cae8313-6v7bp", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:18:14.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2227" for this suite.
Jan  8 13:18:20.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:18:20.368: INFO: namespace replicaset-2227 deletion completed in 6.177998759s

• [SLOW TEST:18.393 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
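
[Annotation] The ReplicaSet spec creates a single-replica set and dials each pod to confirm it serves its own hostname. A sketch of the ReplicaSet creation under the same v1.15-era assumption; the image here is a stand-in (the suite uses a serve-hostname test image), and names are illustrative.

    package main

    import (
        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        replicas := int32(1)
        labels := map[string]string{"name": "my-hostname-basic-demo"}
        rs := &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic-demo"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &replicas,
                // The selector must match the pod template's labels.
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "my-hostname-basic-demo",
                            Image: "docker.io/library/nginx:1.14-alpine", // stand-in image
                            Ports: []v1.ContainerPort{{ContainerPort: 80}},
                        }},
                    },
                },
            },
        }
        if _, err := cs.AppsV1().ReplicaSets("default").Create(rs); err != nil {
            panic(err)
        }
    }
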
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:18:20.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:18:20.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2716" for this suite.
Jan  8 13:18:42.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:18:43.090: INFO: namespace pods-2716 deletion completed in 22.256624821s

• [SLOW TEST:22.721 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
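
[Annotation] The QoS-class spec submits a pod and verifies the API server fills in status.qosClass. A sketch, with illustrative names: requests equal to limits on every container yields the Guaranteed class, which is readable immediately from the Create response.

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Requests == limits on every container puts the pod in the Guaranteed class.
        res := v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        }
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-class-demo"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:      "nginx",
                    Image:     "docker.io/library/nginx:1.14-alpine",
                    Resources: v1.ResourceRequirements{Requests: res, Limits: res},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("default").Create(pod) // namespace illustrative
        if err != nil {
            panic(err)
        }
        fmt.Println("QoS class:", created.Status.QOSClass) // expected: Guaranteed
    }
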
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:18:43.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 13:18:43.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8252'
Jan  8 13:18:43.327: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 13:18:43.328: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan  8 13:18:43.340: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan  8 13:18:43.351: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan  8 13:18:43.444: INFO: scanned /root for discovery docs: 
Jan  8 13:18:43.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8252'
Jan  8 13:19:05.706: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  8 13:19:05.706: INFO: stdout: "Created e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de\nScaling up e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"

STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  8 13:19:05.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8252'
Jan  8 13:19:05.915: INFO: stderr: ""
Jan  8 13:19:05.915: INFO: stdout: "e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de-cshjv "
Jan  8 13:19:05.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de-cshjv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8252'
Jan  8 13:19:06.038: INFO: stderr: ""
Jan  8 13:19:06.039: INFO: stdout: "true"
Jan  8 13:19:06.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de-cshjv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8252'
Jan  8 13:19:06.170: INFO: stderr: ""
Jan  8 13:19:06.171: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  8 13:19:06.171: INFO: e2e-test-nginx-rc-0378531852db651c1d6d5bd6c8c647de-cshjv is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  8 13:19:06.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8252'
Jan  8 13:19:06.285: INFO: stderr: ""
Jan  8 13:19:06.285: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:19:06.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8252" for this suite.
Jan  8 13:19:12.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:19:12.454: INFO: namespace kubectl-8252 deletion completed in 6.161929665s

• [SLOW TEST:29.363 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
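
[Annotation] The rolling-update spec verifies the result with kubectl -o template pipelines (is the container running, is the image the expected one). The same check expressed directly against the API, under the usual v1.15-era assumption; the namespace is the run's ephemeral one, so substitute your own.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Find the pods the `run` label selects, as the template pipeline above does.
        pods, err := cs.CoreV1().Pods("kubectl-8252").List(metav1.ListOptions{
            LabelSelector: "run=e2e-test-nginx-rc",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Map container name -> running, since status order is not guaranteed.
            running := map[string]bool{}
            for _, s := range p.Status.ContainerStatuses {
                running[s.Name] = s.State.Running != nil
            }
            for _, c := range p.Spec.Containers {
                fmt.Printf("%s/%s running=%v image=%s\n", p.Name, c.Name, running[c.Name], c.Image)
            }
        }
    }
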
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:19:12.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan  8 13:19:12.926: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774330,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 13:19:12.927: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774330,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan  8 13:19:22.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774345,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  8 13:19:22.955: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774345,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan  8 13:19:32.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774359,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 13:19:32.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774359,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan  8 13:19:42.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774374,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 13:19:42.996: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-a,UID:beb279fa-f44f-47ef-9d44-05499f228afe,ResourceVersion:19774374,Generation:0,CreationTimestamp:2020-01-08 13:19:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan  8 13:19:53.008: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-b,UID:2cea5ad3-1f68-42bf-9704-4f913ae225ea,ResourceVersion:19774388,Generation:0,CreationTimestamp:2020-01-08 13:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 13:19:53.008: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-b,UID:2cea5ad3-1f68-42bf-9704-4f913ae225ea,ResourceVersion:19774388,Generation:0,CreationTimestamp:2020-01-08 13:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan  8 13:20:03.060: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-b,UID:2cea5ad3-1f68-42bf-9704-4f913ae225ea,ResourceVersion:19774402,Generation:0,CreationTimestamp:2020-01-08 13:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 13:20:03.060: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7554,SelfLink:/api/v1/namespaces/watch-7554/configmaps/e2e-watch-test-configmap-b,UID:2cea5ad3-1f68-42bf-9704-4f913ae225ea,ResourceVersion:19774402,Generation:0,CreationTimestamp:2020-01-08 13:19:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:20:13.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7554" for this suite.
Jan  8 13:20:19.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:20:19.257: INFO: namespace watch-7554 deletion completed in 6.189371197s

• [SLOW TEST:66.803 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
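
[Annotation] In the watch spec above, every event on configmap A is logged twice because two watchers match it: the label-A watch and the "A or B" watch. A sketch of opening those watches with client-go (v1.15-era, no context argument); the "A or B" selector uses set-based syntax. Namespace and printing are illustrative.

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative

        selectors := map[string]string{
            "A":      "watch-this-configmap=multiple-watchers-A",
            "B":      "watch-this-configmap=multiple-watchers-B",
            "A-or-B": "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)",
        }
        for name, sel := range selectors {
            w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            go func(name string) {
                // Each matching watch receives its own copy of every event,
                // which is why the log shows paired ADDED/MODIFIED/DELETED lines.
                for ev := range w.ResultChan() {
                    fmt.Printf("watch %s got: %s\n", name, ev.Type)
                }
            }(name)
        }
        select {} // block forever so the watch goroutines keep printing
    }
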
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:20:19.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2257/secret-test-64537f07-d950-4a34-9cb4-9577eaea1130
STEP: Creating a pod to test consume secrets
Jan  8 13:20:19.350: INFO: Waiting up to 5m0s for pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab" in namespace "secrets-2257" to be "success or failure"
Jan  8 13:20:19.391: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab": Phase="Pending", Reason="", readiness=false. Elapsed: 41.239118ms
Jan  8 13:20:21.405: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055164289s
Jan  8 13:20:23.423: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072332613s
Jan  8 13:20:25.432: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081304521s
Jan  8 13:20:27.439: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088890284s
STEP: Saw pod success
Jan  8 13:20:27.439: INFO: Pod "pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab" satisfied condition "success or failure"
Jan  8 13:20:27.444: INFO: Trying to get logs from node iruya-node pod pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab container env-test: 
STEP: delete the pod
Jan  8 13:20:27.509: INFO: Waiting for pod pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab to disappear
Jan  8 13:20:27.615: INFO: Pod pod-configmaps-608bd507-c317-4806-b7d9-2420b1d36bab no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:20:27.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2257" for this suite.
Jan  8 13:20:33.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:20:33.779: INFO: namespace secrets-2257 deletion completed in 6.15500454s

• [SLOW TEST:14.521 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
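
[Annotation] The secrets-via-environment spec wires a secret key into a container env var and checks the echoed output in the pod logs. A sketch with illustrative names, same v1.15-era client-go assumption.

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative

        secret := &v1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-demo"},
            StringData: map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().Secrets(ns).Create(secret); err != nil {
            panic(err)
        }

        // The container consumes the secret key as $SECRET_DATA and echoes it;
        // the suite's "success or failure" check then reads the container log.
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-env-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:    "env-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo $SECRET_DATA"},
                    Env: []v1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &v1.EnvVarSource{
                            SecretKeyRef: &v1.SecretKeySelector{
                                LocalObjectReference: v1.LocalObjectReference{Name: "secret-test-demo"},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
    }
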
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:20:33.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-b58a30f1-2333-4f8b-8110-01f703b718b6
STEP: Creating a pod to test consume configMaps
Jan  8 13:20:33.926: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53" in namespace "projected-2724" to be "success or failure"
Jan  8 13:20:33.956: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53": Phase="Pending", Reason="", readiness=false. Elapsed: 28.896251ms
Jan  8 13:20:35.964: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037161785s
Jan  8 13:20:37.972: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045369626s
Jan  8 13:20:39.994: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067724537s
Jan  8 13:20:42.007: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080262361s
STEP: Saw pod success
Jan  8 13:20:42.007: INFO: Pod "pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53" satisfied condition "success or failure"
Jan  8 13:20:42.014: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 13:20:42.079: INFO: Waiting for pod pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53 to disappear
Jan  8 13:20:42.086: INFO: Pod pod-projected-configmaps-12939645-4015-4670-a492-b99f124a1e53 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:20:42.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2724" for this suite.
Jan  8 13:20:48.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:20:48.305: INFO: namespace projected-2724 deletion completed in 6.19994045s

• [SLOW TEST:14.526 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
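
[Annotation] "Mappings and Item mode set" means the configmap key is remapped to a different path inside the volume and the file gets an explicit mode instead of the volume default. A sketch of that pod shape (the plain-ConfigMap variant two specs down differs only in using a ConfigMapVolumeSource instead of a projected source); names and namespace are illustrative.

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns := "default" // illustrative

        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
            Data:       map[string]string{"data-1": "value-1"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            panic(err)
        }

        mode := int32(0400) // explicit per-item file mode
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-pod-demo"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Containers: []v1.Container{{
                    Name:         "projected-configmap-volume-test",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/projected/path/to/data-1"},
                    VolumeMounts: []v1.VolumeMount{{Name: "cfg", MountPath: "/etc/projected"}},
                }},
                Volumes: []v1.Volume{{
                    Name: "cfg",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                ConfigMap: &v1.ConfigMapProjection{
                                    LocalObjectReference: v1.LocalObjectReference{Name: "projected-configmap-demo"},
                                    Items: []v1.KeyToPath{{
                                        Key:  "data-1",
                                        Path: "path/to/data-1", // the mapping
                                        Mode: &mode,
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
            panic(err)
        }
    }
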
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:20:48.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f9271243-d6f1-4f5b-8a15-609266b392b8
STEP: Creating a pod to test consume configMaps
Jan  8 13:20:48.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955" in namespace "configmap-823" to be "success or failure"
Jan  8 13:20:48.463: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955": Phase="Pending", Reason="", readiness=false. Elapsed: 58.241404ms
Jan  8 13:20:50.479: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074258783s
Jan  8 13:20:52.511: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106309704s
Jan  8 13:20:54.524: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118848636s
Jan  8 13:20:56.538: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13313429s
STEP: Saw pod success
Jan  8 13:20:56.538: INFO: Pod "pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955" satisfied condition "success or failure"
Jan  8 13:20:56.544: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955 container configmap-volume-test: 
STEP: delete the pod
Jan  8 13:20:56.821: INFO: Waiting for pod pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955 to disappear
Jan  8 13:20:56.903: INFO: Pod pod-configmaps-b13e6b6e-a6dc-4810-b60a-8fbf5ff2d955 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:20:56.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-823" for this suite.
Jan  8 13:21:02.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:21:03.117: INFO: namespace configmap-823 deletion completed in 6.202805555s

• [SLOW TEST:14.811 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:21:03.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan  8 13:21:03.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  8 13:21:05.245: INFO: stderr: ""
Jan  8 13:21:05.245: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:21:05.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2415" for this suite.
Jan  8 13:21:11.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:21:11.402: INFO: namespace kubectl-2415 deletion completed in 6.147339176s

• [SLOW TEST:8.284 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:21:11.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  8 13:24:09.755: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:09.770: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:11.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:11.780: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:13.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:13.791: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:15.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:15.787: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:17.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:17.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:19.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:19.795: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:21.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:21.780: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:23.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:23.786: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:25.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:25.780: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:27.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:27.788: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:29.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:29.787: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  8 13:24:31.770: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  8 13:24:31.779: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:24:31.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9537" for this suite.
Jan  8 13:24:53.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:24:53.969: INFO: namespace container-lifecycle-hook-9537 deletion completed in 22.179867353s

• [SLOW TEST:222.567 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
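
[Annotation] The lifecycle-hook spec attaches a postStart exec handler; the kubelet runs it inside the container immediately after start and does not mark the container Running until the hook returns. A sketch of the pod shape, with illustrative names; in the v1.15-era API the handler type is v1.Handler (renamed LifecycleHandler in newer releases). The http-hook variant further down swaps the Exec action for an HTTPGet action.

    package main

    import (
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook-demo"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:    "pod-with-poststart-exec-hook",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "sleep 3600"},
                    Lifecycle: &v1.Lifecycle{
                        PostStart: &v1.Handler{
                            Exec: &v1.ExecAction{
                                Command: []string{"sh", "-c", "echo poststart > /tmp/hook"},
                            },
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil { // namespace illustrative
            panic(err)
        }
    }
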
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:24:53.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:24:54.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c" in namespace "projected-5475" to be "success or failure"
Jan  8 13:24:54.151: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.265597ms
Jan  8 13:24:56.165: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029928495s
Jan  8 13:24:58.174: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039219832s
Jan  8 13:25:00.184: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048515183s
Jan  8 13:25:02.197: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062148935s
STEP: Saw pod success
Jan  8 13:25:02.197: INFO: Pod "downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c" satisfied condition "success or failure"
Jan  8 13:25:02.204: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c container client-container: 
STEP: delete the pod
Jan  8 13:25:02.295: INFO: Waiting for pod downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c to disappear
Jan  8 13:25:02.313: INFO: Pod downwardapi-volume-899bed17-8d93-4bc8-a6a3-c4d68491a03c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:25:02.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5475" for this suite.
Jan  8 13:25:08.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:25:08.481: INFO: namespace projected-5475 deletion completed in 6.162419805s

• [SLOW TEST:14.512 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:25:08.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  8 13:25:24.726: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:24.735: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:26.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:26.748: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:28.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:28.743: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:30.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:30.743: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:32.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:32.744: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:34.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:34.740: INFO: Pod pod-with-poststart-http-hook still exists
Jan  8 13:25:36.736: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  8 13:25:36.747: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:25:36.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1910" for this suite.
Jan  8 13:25:58.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:25:58.919: INFO: namespace container-lifecycle-hook-1910 deletion completed in 22.166670154s

• [SLOW TEST:50.437 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:25:58.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  8 13:25:59.016: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9027,SelfLink:/api/v1/namespaces/watch-9027/configmaps/e2e-watch-test-watch-closed,UID:dc2e290e-5332-4570-8840-727a9c8e1b5b,ResourceVersion:19775081,Generation:0,CreationTimestamp:2020-01-08 13:25:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 13:25:59.016: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9027,SelfLink:/api/v1/namespaces/watch-9027/configmaps/e2e-watch-test-watch-closed,UID:dc2e290e-5332-4570-8840-727a9c8e1b5b,ResourceVersion:19775082,Generation:0,CreationTimestamp:2020-01-08 13:25:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  8 13:25:59.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9027,SelfLink:/api/v1/namespaces/watch-9027/configmaps/e2e-watch-test-watch-closed,UID:dc2e290e-5332-4570-8840-727a9c8e1b5b,ResourceVersion:19775083,Generation:0,CreationTimestamp:2020-01-08 13:25:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 13:25:59.069: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9027,SelfLink:/api/v1/namespaces/watch-9027/configmaps/e2e-watch-test-watch-closed,UID:dc2e290e-5332-4570-8840-727a9c8e1b5b,ResourceVersion:19775084,Generation:0,CreationTimestamp:2020-01-08 13:25:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:25:59.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9027" for this suite.
Jan  8 13:26:05.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:26:05.219: INFO: namespace watch-9027 deletion completed in 6.111711387s

• [SLOW TEST:6.299 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
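The test above closes a watch after two notifications and then resumes from the last resourceVersion it observed. A minimal client-go sketch of that flow — the kubeconfig path, namespace, and label selector are illustrative, and method signatures assume a recent client-go (where Watch takes a context):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms := client.CoreV1().ConfigMaps("watch-9027")

	// First watch: record the resourceVersion of each event we receive.
	w1, err := cms.Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=watch-closed-and-restarted",
	})
	if err != nil {
		panic(err)
	}
	var lastRV string
	for i := 0; i < 2; i++ { // e.g. ADDED + MODIFIED, as in the log above
		ev := <-w1.ResultChan()
		lastRV = ev.Object.(*corev1.ConfigMap).ResourceVersion
		fmt.Println("Got:", ev.Type, lastRV)
	}
	w1.Stop()

	// Second watch: resume from lastRV; changes made while the first watch
	// was closed are replayed before new events arrive.
	w2, err := cms.Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	defer w2.Stop()
	for ev := range w2.ResultChan() {
		fmt.Println("Got:", ev.Type)
	}
}
```

In general, if the saved resourceVersion has already been compacted away, the API server ends the watch with 410 Gone and the client must re-list before watching again.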
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:26:05.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0108 13:26:21.123969       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 13:26:21.124: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:26:21.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4479" for this suite.
Jan  8 13:26:39.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:26:39.508: INFO: namespace gc-4479 deletion completed in 18.35717978s

• [SLOW TEST:34.289 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
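The sequence above hinges on owner references: half of the doomed RC's pods are given a second owner, so foreground deletion of simpletest-rc-to-be-deleted must leave them alone. A sketch of adding the extra owner and issuing the foreground delete, assuming a recent client-go (the dependent pod name and kubeconfig path are hypothetical):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "gc-4479" // illustrative
	rcs := client.CoreV1().ReplicationControllers(ns)
	pods := client.CoreV1().Pods(ns)

	// Make simpletest-rc-to-stay a second owner of an existing dependent pod.
	toStay, err := rcs.Get(context.TODO(), "simpletest-rc-to-stay", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pod, err := pods.Get(context.TODO(), "simpletest-rc-to-be-deleted-xxxxx", metav1.GetOptions{}) // hypothetical name
	if err != nil {
		panic(err)
	}
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       toStay.Name,
		UID:        toStay.UID,
	})
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Foreground-delete the other owner: the GC removes its sole-owned pods,
	// but must keep any pod that still lists a live owner.
	fg := metav1.DeletePropagationForeground
	if err := rcs.Delete(context.TODO(), "simpletest-rc-to-be-deleted", metav1.DeleteOptions{
		PropagationPolicy: &fg,
	}); err != nil {
		panic(err)
	}
}
```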
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:26:39.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  8 13:26:48.213: INFO: Successfully updated pod "annotationupdate9db6b203-d0b0-4c9c-9f7b-4924c48d23f3"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:26:52.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4495" for this suite.
Jan  8 13:27:14.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:27:14.502: INFO: namespace projected-4495 deletion completed in 22.194075871s

• [SLOW TEST:34.994 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
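The annotation-update test relies on the kubelet refreshing downward API volume files when pod metadata changes. A sketch of a pod with a projected downward API volume exposing metadata.annotations (names, image, and the initial annotation are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func annotationsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "initial"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "annotations",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.annotations",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = annotationsPod() }
```

After the pod's annotations are updated through the API, the kubelet rewrites /etc/podinfo/annotations on its sync loop; the refresh is eventually consistent rather than instantaneous, which is why the test polls.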
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:27:14.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:27:14.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96" in namespace "downward-api-4681" to be "success or failure"
Jan  8 13:27:14.658: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96": Phase="Pending", Reason="", readiness=false. Elapsed: 60.997835ms
Jan  8 13:27:16.673: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076870557s
Jan  8 13:27:18.697: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100727969s
Jan  8 13:27:20.706: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109583732s
Jan  8 13:27:22.714: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117766234s
STEP: Saw pod success
Jan  8 13:27:22.714: INFO: Pod "downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96" satisfied condition "success or failure"
Jan  8 13:27:22.719: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96 container client-container: 
STEP: delete the pod
Jan  8 13:27:22.874: INFO: Waiting for pod downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96 to disappear
Jan  8 13:27:22.889: INFO: Pod downwardapi-volume-1e212153-afc3-45d0-bf85-aba589ca5a96 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:27:22.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4681" for this suite.
Jan  8 13:27:29.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:27:29.193: INFO: namespace downward-api-4681 deletion completed in 6.290264275s

• [SLOW TEST:14.690 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
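The pod under test mounts a downward API volume whose file is populated from the container's own CPU limit via a resourceFieldRef. A sketch of that pod spec (names, image, and the 500m limit are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = downwardAPIPod() }
```

The node-allocatable variant later in this run is the same spec minus the resources block: with no limit set, the kubelet falls back to the node's allocatable CPU as the default.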
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:27:29.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  8 13:27:29.313: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775388,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  8 13:27:29.313: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775389,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  8 13:27:29.314: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775390,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  8 13:27:39.380: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775405,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 13:27:39.381: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775406,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  8 13:27:39.381: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-567,SelfLink:/api/v1/namespaces/watch-567/configmaps/e2e-watch-test-label-changed,UID:e5217072-2e7c-4206-9909-e4c5913ce2f3,ResourceVersion:19775407,Generation:0,CreationTimestamp:2020-01-08 13:27:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:27:39.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-567" for this suite.
Jan  8 13:27:45.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:27:45.592: INFO: namespace watch-567 deletion completed in 6.192706465s

• [SLOW TEST:16.399 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
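This watch is scoped by a label selector, so relabeling the object out of the selector is delivered as DELETED and relabeling it back as ADDED, even though the object itself was only modified. A sketch of the label flip against a selector-scoped watch (names are illustrative; a recent client-go is assumed):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cms := client.CoreV1().ConfigMaps("watch-567")

	// Watch only configmaps carrying the target label value.
	w, err := cms.Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	cm, err := cms.Get(context.TODO(), "e2e-watch-test-label-changed", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cm.Labels["watch-this-configmap"] = "somewhere-else" // leaves the selector: watch sees DELETED
	cm, err = cms.Update(context.TODO(), cm, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	cm.Labels["watch-this-configmap"] = "label-changed-and-restored" // rejoins: watch sees ADDED
	if _, err = cms.Update(context.TODO(), cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	for ev := range w.ResultChan() {
		fmt.Println("Got:", ev.Type)
	}
}
```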
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:27:45.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:27:45.724: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  8 13:27:48.165: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:27:48.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9043" for this suite.
Jan  8 13:27:58.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:27:59.080: INFO: namespace replication-controller-9043 deletion completed in 10.846794796s

• [SLOW TEST:13.486 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
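The quota here caps the namespace at two pods, so an RC requesting more replicas records a ReplicaFailure condition until it is scaled down to fit, at which point the condition clears, exactly the steps logged above. A sketch of such a quota object (the name and limit mirror the test; everything else is illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota allows at most two pods in the namespace it is created in.
func podQuota() *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
}

func main() { _ = podQuota() }
```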
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:27:59.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 13:27:59.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b" in namespace "downward-api-84" to be "success or failure"
Jan  8 13:27:59.219: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.685254ms
Jan  8 13:28:01.225: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012287867s
Jan  8 13:28:03.241: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027434388s
Jan  8 13:28:05.249: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035820547s
Jan  8 13:28:07.258: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044798497s
Jan  8 13:28:09.270: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057105198s
STEP: Saw pod success
Jan  8 13:28:09.270: INFO: Pod "downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b" satisfied condition "success or failure"
Jan  8 13:28:09.275: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b container client-container: 
STEP: delete the pod
Jan  8 13:28:09.443: INFO: Waiting for pod downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b to disappear
Jan  8 13:28:09.466: INFO: Pod downwardapi-volume-4a2e60ab-2af0-403e-a859-0b3321613f1b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:28:09.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-84" for this suite.
Jan  8 13:28:15.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:28:15.702: INFO: namespace downward-api-84 deletion completed in 6.228314524s

• [SLOW TEST:16.622 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:28:15.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  8 13:28:15.819: INFO: Waiting up to 5m0s for pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b" in namespace "emptydir-3808" to be "success or failure"
Jan  8 13:28:15.826: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.865808ms
Jan  8 13:28:17.845: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026412929s
Jan  8 13:28:19.859: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040306022s
Jan  8 13:28:21.875: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055974243s
Jan  8 13:28:23.893: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074364222s
STEP: Saw pod success
Jan  8 13:28:23.894: INFO: Pod "pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b" satisfied condition "success or failure"
Jan  8 13:28:23.899: INFO: Trying to get logs from node iruya-node pod pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b container test-container: 
STEP: delete the pod
Jan  8 13:28:23.974: INFO: Waiting for pod pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b to disappear
Jan  8 13:28:24.021: INFO: Pod pod-c253937e-b7c2-4f1b-9fc7-c9013d5a958b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:28:24.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3808" for this suite.
Jan  8 13:28:30.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:28:30.290: INFO: namespace emptydir-3808 deletion completed in 6.259146237s

• [SLOW TEST:14.587 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
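The (non-root,0644,tmpfs) case mounts an emptyDir backed by memory and writes to it as a non-root user. A sketch of the corresponding pod spec (the UID, image, and command are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	nonRoot := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRoot,
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed
					},
				},
			}},
		},
	}
}

func main() { _ = emptyDirPod() }
```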
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:28:30.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  8 13:28:30.422: INFO: Waiting up to 5m0s for pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b" in namespace "downward-api-9050" to be "success or failure"
Jan  8 13:28:30.434: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.596112ms
Jan  8 13:28:32.447: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024564782s
Jan  8 13:28:34.456: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033647055s
Jan  8 13:28:36.466: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043540056s
Jan  8 13:28:38.475: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05274484s
STEP: Saw pod success
Jan  8 13:28:38.475: INFO: Pod "downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b" satisfied condition "success or failure"
Jan  8 13:28:38.479: INFO: Trying to get logs from node iruya-node pod downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b container dapi-container: 
STEP: delete the pod
Jan  8 13:28:38.639: INFO: Waiting for pod downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b to disappear
Jan  8 13:28:38.646: INFO: Pod downward-api-9c0c917b-d94e-49e3-938c-9923ee12bf2b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:28:38.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9050" for this suite.
Jan  8 13:28:44.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:28:44.792: INFO: namespace downward-api-9050 deletion completed in 6.138759817s

• [SLOW TEST:14.502 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
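Here the pod's UID reaches the container as an environment variable through a downward API fieldRef. A sketch (names and image are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func podUIDEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo POD_UID=$POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.uid",
						},
					},
				}},
			}},
		},
	}
}

func main() { _ = podUIDEnvPod() }
```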
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:28:44.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lh74
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 13:28:44.884: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lh74" in namespace "subpath-3690" to be "success or failure"
Jan  8 13:28:44.913: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Pending", Reason="", readiness=false. Elapsed: 28.889953ms
Jan  8 13:28:46.929: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044881296s
Jan  8 13:28:48.941: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056867258s
Jan  8 13:28:50.947: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062986453s
Jan  8 13:28:52.961: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 8.076370264s
Jan  8 13:28:54.993: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 10.109061573s
Jan  8 13:28:57.001: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 12.117183579s
Jan  8 13:28:59.010: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 14.126188341s
Jan  8 13:29:01.021: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 16.136229782s
Jan  8 13:29:03.419: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 18.535190391s
Jan  8 13:29:05.434: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 20.549551377s
Jan  8 13:29:07.443: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 22.558878451s
Jan  8 13:29:09.452: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 24.568191689s
Jan  8 13:29:11.463: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Running", Reason="", readiness=true. Elapsed: 26.578449951s
Jan  8 13:29:13.472: INFO: Pod "pod-subpath-test-configmap-lh74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.587901447s
STEP: Saw pod success
Jan  8 13:29:13.472: INFO: Pod "pod-subpath-test-configmap-lh74" satisfied condition "success or failure"
Jan  8 13:29:13.476: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lh74 container test-container-subpath-configmap-lh74: 
STEP: delete the pod
Jan  8 13:29:13.733: INFO: Waiting for pod pod-subpath-test-configmap-lh74 to disappear
Jan  8 13:29:13.743: INFO: Pod pod-subpath-test-configmap-lh74 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lh74
Jan  8 13:29:13.743: INFO: Deleting pod "pod-subpath-test-configmap-lh74" in namespace "subpath-3690"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:29:13.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3690" for this suite.
Jan  8 13:29:21.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:29:21.958: INFO: namespace subpath-3690 deletion completed in 8.199767184s

• [SLOW TEST:37.165 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
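The subpath test mounts a single volume key over a path where a file already exists; subPath makes the mount target a single file rather than a directory. A sketch of the pattern (the ConfigMap name, key, and target path are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/resolv.conf", // an existing file path in the image
					SubPath:   "resolv.conf",      // a single key from the ConfigMap volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
}

func main() { _ = subpathPod() }
```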
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:29:21.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-94a415ab-7b30-428a-877f-a8d16c679001
STEP: Creating a pod to test consume configMaps
Jan  8 13:29:22.045: INFO: Waiting up to 5m0s for pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1" in namespace "configmap-7983" to be "success or failure"
Jan  8 13:29:22.143: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 97.655297ms
Jan  8 13:29:24.151: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105820224s
Jan  8 13:29:26.160: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115259893s
Jan  8 13:29:28.168: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123156s
Jan  8 13:29:30.181: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136004937s
STEP: Saw pod success
Jan  8 13:29:30.181: INFO: Pod "pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1" satisfied condition "success or failure"
Jan  8 13:29:30.188: INFO: Trying to get logs from node iruya-node pod pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1 container configmap-volume-test: 
STEP: delete the pod
Jan  8 13:29:30.348: INFO: Waiting for pod pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1 to disappear
Jan  8 13:29:30.355: INFO: Pod pod-configmaps-83431469-b801-4fbf-a473-b4e269845fd1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:29:30.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7983" for this suite.
Jan  8 13:29:36.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:29:36.532: INFO: namespace configmap-7983 deletion completed in 6.169774908s

• [SLOW TEST:14.575 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:29:36.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:29:36.663: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:29:37.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4277" for this suite.
Jan  8 13:29:43.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:29:43.988: INFO: namespace custom-resource-definition-4277 deletion completed in 6.222019737s

• [SLOW TEST:7.454 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
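Creating and deleting a CRD is a single create/delete pair against the apiextensions API. A sketch using the apiextensions clientset (the group, kind, and kubeconfig path are illustrative; the v1 CRD API requires a structural schema, hence the minimal object schema):

```go
package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextclient.NewForConfigOrDie(cfg)

	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}

	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	if err := crds.Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
```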
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:29:43.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  8 13:29:52.680: INFO: Successfully updated pod "pod-update-c17af1bf-0749-4f64-bf2b-f8a50eb3645d"
STEP: verifying the updated pod is in kubernetes
Jan  8 13:29:52.714: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:29:52.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3160" for this suite.
Jan  8 13:30:14.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:30:14.853: INFO: namespace pods-3160 deletion completed in 22.133032231s

• [SLOW TEST:30.864 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
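The pod update is a read-modify-write against the API server. A sketch of the same operation, here wrapped in RetryOnConflict — an addition for robustness, not something the test itself does — so that resourceVersion conflicts are retried (pod name, namespace, and label are illustrative):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods := client.CoreV1().Pods("pods-3160")

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update carries a fresh resourceVersion.
		pod, err := pods.Get(context.TODO(), "pod-update-demo", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // mutate something updatable on a running pod
		_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}
```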
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:30:14.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d5519b71-7687-49bc-8b50-7c87b3c8a611
STEP: Creating a pod to test consume secrets
Jan  8 13:30:15.080: INFO: Waiting up to 5m0s for pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af" in namespace "secrets-9562" to be "success or failure"
Jan  8 13:30:15.092: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af": Phase="Pending", Reason="", readiness=false. Elapsed: 11.177385ms
Jan  8 13:30:17.104: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023941202s
Jan  8 13:30:19.113: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032883019s
Jan  8 13:30:21.120: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039717275s
Jan  8 13:30:23.130: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049364826s
STEP: Saw pod success
Jan  8 13:30:23.130: INFO: Pod "pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af" satisfied condition "success or failure"
Jan  8 13:30:23.134: INFO: Trying to get logs from node iruya-node pod pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af container secret-volume-test: 
STEP: delete the pod
Jan  8 13:30:23.405: INFO: Waiting for pod pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af to disappear
Jan  8 13:30:23.423: INFO: Pod pod-secrets-541d8aff-5e61-4932-8f44-c6ed17ccf9af no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:30:23.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9562" for this suite.
Jan  8 13:30:29.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:30:29.656: INFO: namespace secrets-9562 deletion completed in 6.222226852s
STEP: Destroying namespace "secret-namespace-9089" for this suite.
Jan  8 13:30:35.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:30:35.886: INFO: namespace secret-namespace-9089 deletion completed in 6.230411184s

• [SLOW TEST:21.033 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:30:35.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f774b98e-b58e-463f-8121-693e718859e3
STEP: Creating a pod to test consume secrets
Jan  8 13:30:36.038: INFO: Waiting up to 5m0s for pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf" in namespace "secrets-7246" to be "success or failure"
Jan  8 13:30:36.054: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.38193ms
Jan  8 13:30:38.063: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024415074s
Jan  8 13:30:40.069: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030664258s
Jan  8 13:30:42.079: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04092835s
Jan  8 13:30:44.117: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078588245s
STEP: Saw pod success
Jan  8 13:30:44.117: INFO: Pod "pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf" satisfied condition "success or failure"
Jan  8 13:30:44.122: INFO: Trying to get logs from node iruya-node pod pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf container secret-volume-test: 
STEP: delete the pod
Jan  8 13:30:44.164: INFO: Waiting for pod pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf to disappear
Jan  8 13:30:44.169: INFO: Pod pod-secrets-aa903aa8-2519-4e9d-a6aa-a989663dd9cf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:30:44.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7246" for this suite.
Jan  8 13:30:50.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:30:50.372: INFO: namespace secrets-7246 deletion completed in 6.190982258s

• [SLOW TEST:14.486 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
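"Volume with mappings" means the secret's keys are remapped to chosen paths (and optionally modes) via items, instead of each key becoming a file named after itself. A sketch (the secret name, key, path, and mode are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretMappingPod() *corev1.Pod {
	mode := int32(0400) // illustrative per-file mode
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map",
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key in the Secret
							Path: "new-path-data-1", // file name inside the mount
							Mode: &mode,
						}},
					},
				},
			}},
		},
	}
}

func main() { _ = secretMappingPod() }
```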
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:30:50.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 13:30:58.659: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:30:58.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4617" for this suite.
Jan  8 13:31:04.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:31:05.098: INFO: namespace container-runtime-4617 deletion completed in 6.167437037s

• [SLOW TEST:14.726 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
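With TerminationMessagePolicy FallbackToLogsOnError, the kubelet reads the termination message from terminationMessagePath when the container wrote one, and falls back to the tail of the container log only when the container fails without writing the file. Here the container succeeds and writes "OK" to the file, matching the Expected: &{OK} check above. A sketch (image and command are illustrative):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { _ = terminationMessagePod() }
```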
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:31:05.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5973
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  8 13:31:05.253: INFO: Found 0 stateful pods, waiting for 3
Jan  8 13:31:15.263: INFO: Found 2 stateful pods, waiting for 3
Jan  8 13:31:25.270: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:31:25.270: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:31:25.270: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  8 13:31:35.265: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:31:35.265: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:31:35.265: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  8 13:31:35.320: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  8 13:31:45.369: INFO: Updating stateful set ss2
Jan  8 13:31:45.391: INFO: Waiting for Pod statefulset-5973/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 13:31:55.406: INFO: Waiting for Pod statefulset-5973/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  8 13:32:05.742: INFO: Found 2 stateful pods, waiting for 3
Jan  8 13:32:15.751: INFO: Found 2 stateful pods, waiting for 3
Jan  8 13:32:25.757: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:32:25.757: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:32:25.757: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  8 13:32:25.807: INFO: Updating stateful set ss2
Jan  8 13:32:25.833: INFO: Waiting for Pod statefulset-5973/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 13:32:36.131: INFO: Updating stateful set ss2
Jan  8 13:32:36.773: INFO: Waiting for StatefulSet statefulset-5973/ss2 to complete update
Jan  8 13:32:36.773: INFO: Waiting for Pod statefulset-5973/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 13:32:46.788: INFO: Waiting for StatefulSet statefulset-5973/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  8 13:32:56.789: INFO: Deleting all statefulset in ns statefulset-5973
Jan  8 13:32:56.794: INFO: Scaling statefulset ss2 to 0
Jan  8 13:33:26.864: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 13:33:26.870: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:33:26.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5973" for this suite.
Jan  8 13:33:34.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:33:35.080: INFO: namespace statefulset-5973 deletion completed in 8.174380201s

• [SLOW TEST:149.981 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
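The canary and phased roll-out above both come from the RollingUpdate partition: pods with ordinal >= partition move to the new revision, pods below it stay on the old one, and lowering the partition step by step phases the roll-out across the set. A sketch of setting the partition (replica counts and values are illustrative):

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
)

// withCanaryPartition pins pods with ordinal < partition to the current
// revision; only ordinals >= partition receive the new pod template.
func withCanaryPartition(ss *appsv1.StatefulSet, partition int32) *appsv1.StatefulSet {
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	return ss
}

func main() {
	ss := &appsv1.StatefulSet{}
	// Canary: with 3 replicas (ss-0..ss-2), partition=2 updates only ss-2.
	_ = withCanaryPartition(ss, 2)
	// Phased roll-out: lower the partition in steps (2 -> 1 -> 0) to move
	// the remaining ordinals onto the new revision one by one.
	_ = withCanaryPartition(ss, 0)
}
```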
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:33:35.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-de7a2048-225c-4d8b-8efb-ae0c0eb3f08c
STEP: Creating a pod to test consume configMaps
Jan  8 13:33:35.213: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd" in namespace "projected-6067" to be "success or failure"
Jan  8 13:33:35.216: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381513ms
Jan  8 13:33:37.227: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014361689s
Jan  8 13:33:39.234: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021341429s
Jan  8 13:33:41.484: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270991983s
Jan  8 13:33:43.492: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279336815s
Jan  8 13:33:45.501: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.287865582s
STEP: Saw pod success
Jan  8 13:33:45.501: INFO: Pod "pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd" satisfied condition "success or failure"
Jan  8 13:33:45.505: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 13:33:45.657: INFO: Waiting for pod pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd to disappear
Jan  8 13:33:45.666: INFO: Pod pod-projected-configmaps-022cd360-9bea-4d91-8502-6b317d1f13fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:33:45.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6067" for this suite.
Jan  8 13:33:51.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:33:51.802: INFO: namespace projected-6067 deletion completed in 6.125138869s

• [SLOW TEST:16.722 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
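
Note: the pod above consumes a ConfigMap through a projected volume. A minimal equivalent manifest might look like this; the image, mount path, and key name are assumptions, not from the log.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                       # illustrative; the suite uses its own test image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # assumed to exist with a data-1 key
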
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:33:51.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e54f85d6-5f6b-4f1f-8f07-d8d1922fea0a
STEP: Creating a pod to test consume configMaps
Jan  8 13:33:51.879: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b" in namespace "configmap-7697" to be "success or failure"
Jan  8 13:33:51.930: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.322446ms
Jan  8 13:33:53.944: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065197053s
Jan  8 13:33:55.950: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071274095s
Jan  8 13:33:57.959: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080075966s
Jan  8 13:33:59.970: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Running", Reason="", readiness=true. Elapsed: 8.090951034s
Jan  8 13:34:01.976: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.096566099s
STEP: Saw pod success
Jan  8 13:34:01.976: INFO: Pod "pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b" satisfied condition "success or failure"
Jan  8 13:34:01.979: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b container configmap-volume-test: 
STEP: delete the pod
Jan  8 13:34:02.049: INFO: Waiting for pod pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b to disappear
Jan  8 13:34:02.061: INFO: Pod pod-configmaps-fb599c1d-ed57-46a6-8369-372ce7e2a77b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:34:02.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7697" for this suite.
Jan  8 13:34:08.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:34:08.337: INFO: namespace configmap-7697 deletion completed in 6.270069841s

• [SLOW TEST:16.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
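
Note: this test mounts one ConfigMap at two places in the same pod. A sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-two-mounts
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # illustrative
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:                         # the same ConfigMap backs both volumes
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume
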
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:34:08.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  8 13:34:08.467: INFO: Waiting up to 5m0s for pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd" in namespace "containers-4094" to be "success or failure"
Jan  8 13:34:08.473: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.593192ms
Jan  8 13:34:10.486: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01927316s
Jan  8 13:34:12.958: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490874974s
Jan  8 13:34:14.969: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.502601749s
Jan  8 13:34:16.986: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.519226126s
STEP: Saw pod success
Jan  8 13:34:16.986: INFO: Pod "client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd" satisfied condition "success or failure"
Jan  8 13:34:16.996: INFO: Trying to get logs from node iruya-node pod client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd container test-container: 
STEP: delete the pod
Jan  8 13:34:17.255: INFO: Waiting for pod client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd to disappear
Jan  8 13:34:17.265: INFO: Pod client-containers-3c5997aa-6ca1-4dee-93d4-b59ec499ecbd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:34:17.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4094" for this suite.
Jan  8 13:34:23.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:34:23.497: INFO: namespace containers-4094 deletion completed in 6.21980098s

• [SLOW TEST:15.159 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
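
Note: overriding an image's default Docker ENTRYPOINT is done by setting the container's command field. A minimal sketch (image and message illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                            # illustrative
    # `command` replaces the image's ENTRYPOINT; `args` would replace its CMD.
    command: ["/bin/echo", "entrypoint overridden"]
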
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:34:23.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5c3529db-fc3d-44bc-a590-7a8b4016001b
STEP: Creating configMap with name cm-test-opt-upd-2d8062a1-4be5-48f2-b164-11f750fd6f7b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5c3529db-fc3d-44bc-a590-7a8b4016001b
STEP: Updating configmap cm-test-opt-upd-2d8062a1-4be5-48f2-b164-11f750fd6f7b
STEP: Creating configMap with name cm-test-opt-create-8eb88520-f7fb-4993-98c8-71cdb5d730b0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:35:50.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3789" for this suite.
Jan  8 13:36:14.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:36:14.614: INFO: namespace configmap-3789 deletion completed in 24.227101399s

• [SLOW TEST:111.117 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
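
Note: marking a ConfigMap volume source optional lets the pod start before the ConfigMap exists and keeps it running if the ConfigMap is deleted; the kubelet then syncs create/update/delete into the mounted files, which is what the "waiting to observe update in volume" step above checks. A sketch of one such volume (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional
spec:
  containers:
  - name: cm-volume-test
    image: busybox                  # illustrative
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-create
      mountPath: /etc/cm-volume-create
  volumes:
  - name: cm-create
    configMap:
      name: cm-test-opt-create      # may not exist yet; the pod still starts
      optional: true                # kubelet populates the mount once it appears
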
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:36:14.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  8 13:36:14.724: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:36:27.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3973" for this suite.
Jan  8 13:36:33.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:36:33.937: INFO: namespace init-container-3973 deletion completed in 6.151035556s

• [SLOW TEST:19.323 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
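
Note: with restartPolicy: Never, a failing init container is terminal: the pod is marked Failed and its app containers never start, which is what the test above asserts. A minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never          # init failure is terminal: the pod goes straight to Failed
  initContainers:
  - name: init1
    image: busybox              # illustrative
    command: ["/bin/false"]     # always fails, so the app container below never starts
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]
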
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:36:33.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  8 13:36:34.102: INFO: Waiting up to 5m0s for pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4" in namespace "emptydir-3951" to be "success or failure"
Jan  8 13:36:34.119: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.591065ms
Jan  8 13:36:36.128: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02599104s
Jan  8 13:36:38.141: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038592009s
Jan  8 13:36:40.155: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052196364s
Jan  8 13:36:42.164: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061702834s
STEP: Saw pod success
Jan  8 13:36:42.164: INFO: Pod "pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4" satisfied condition "success or failure"
Jan  8 13:36:42.169: INFO: Trying to get logs from node iruya-node pod pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4 container test-container: 
STEP: delete the pod
Jan  8 13:36:42.266: INFO: Waiting for pod pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4 to disappear
Jan  8 13:36:42.293: INFO: Pod pod-b55d1a4e-0978-456e-bfcc-652bf61d2ec4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:36:42.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3951" for this suite.
Jan  8 13:36:48.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:36:48.512: INFO: namespace emptydir-3951 deletion completed in 6.175058905s

• [SLOW TEST:14.572 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
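
Note: the "(root,0777,default)" case writes into an emptyDir on the node's default medium and verifies ownership and mode. A minimal pod of the same shape; the image and command are stand-ins for the suite's test image.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # illustrative
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: backed by the node's filesystem
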
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:36:48.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  8 13:36:48.656: INFO: Waiting up to 5m0s for pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44" in namespace "downward-api-5790" to be "success or failure"
Jan  8 13:36:48.672: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44": Phase="Pending", Reason="", readiness=false. Elapsed: 16.224896ms
Jan  8 13:36:50.687: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031718017s
Jan  8 13:36:52.695: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039419639s
Jan  8 13:36:54.707: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051396784s
Jan  8 13:36:56.735: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079436755s
STEP: Saw pod success
Jan  8 13:36:56.735: INFO: Pod "downward-api-6d961a0e-c612-4b28-b003-af46d7332d44" satisfied condition "success or failure"
Jan  8 13:36:56.739: INFO: Trying to get logs from node iruya-node pod downward-api-6d961a0e-c612-4b28-b003-af46d7332d44 container dapi-container: 
STEP: delete the pod
Jan  8 13:36:56.866: INFO: Waiting for pod downward-api-6d961a0e-c612-4b28-b003-af46d7332d44 to disappear
Jan  8 13:36:56.875: INFO: Pod downward-api-6d961a0e-c612-4b28-b003-af46d7332d44 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:36:56.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5790" for this suite.
Jan  8 13:37:02.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:37:03.028: INFO: namespace downward-api-5790 deletion completed in 6.146214527s

• [SLOW TEST:14.515 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
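
Note: the host IP reaches the container through the downward API as an environment variable. A sketch (image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                   # illustrative
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node the pod was scheduled onto
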
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:37:03.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-37060c55-6ef9-4132-801a-af820d93fbf6
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:37:03.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-277" for this suite.
Jan  8 13:37:09.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:37:09.438: INFO: namespace configmap-277 deletion completed in 6.185082499s

• [SLOW TEST:6.410 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
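
Note: ConfigMap keys are validated server-side, so the create call above fails and nothing is stored. A manifest of the kind that gets rejected:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"    # rejected by validation: keys must be non-empty and match [-._a-zA-Z0-9]+
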
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:37:09.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  8 13:37:09.575: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 13:37:09.588: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 13:37:09.594: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  8 13:37:09.606: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.606: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 13:37:09.606: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  8 13:37:09.606: INFO: 	Container weave ready: true, restart count 0
Jan  8 13:37:09.606: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 13:37:09.606: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  8 13:37:09.621: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  8 13:37:09.621: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 13:37:09.621: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  8 13:37:09.621: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  8 13:37:09.621: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container coredns ready: true, restart count 0
Jan  8 13:37:09.621: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container etcd ready: true, restart count 0
Jan  8 13:37:09.621: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container weave ready: true, restart count 0
Jan  8 13:37:09.621: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 13:37:09.621: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  8 13:37:09.621: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  8 13:37:09.754: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  8 13:37:09.754: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40.15e7ecdd5d37b761], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6286/filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40.15e7ecde93c793da], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40.15e7ecdf7b0cb2f8], Reason = [Created], Message = [Created container filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40.15e7ecdf9aec26b8], Reason = [Started], Message = [Started container filler-pod-46b6f2ad-bee8-4252-9b6d-adab54748e40]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87.15e7ecdd5620d715], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6286/filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87.15e7ecde9949e5f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87.15e7ecdf6aed5db8], Reason = [Created], Message = [Created container filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87.15e7ecdf8fbded9a], Reason = [Started], Message = [Started container filler-pod-a4c1d03b-5883-4196-8612-c3fdfc225f87]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e7ece02ae0d7b1], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:37:23.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6286" for this suite.
Jan  8 13:37:29.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:37:30.606: INFO: namespace sched-pred-6286 deletion completed in 7.515955249s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.168 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
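
Note: the test labels the nodes, fills each with a pause pod sized to its remaining allocatable CPU, then submits one more pod that cannot fit, expecting the FailedScheduling event above. That last pod looks roughly like this; the CPU figure is illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # the same pause image the filler pods use
    resources:
      requests:
        cpu: "600m"                # illustrative: anything above the CPU still allocatable
      limits:
        cpu: "600m"
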
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:37:30.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  8 13:37:41.024: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  8 13:38:01.208: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:38:01.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3700" for this suite.
Jan  8 13:38:07.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:38:07.441: INFO: namespace pods-3700 deletion completed in 6.221962302s

• [SLOW TEST:36.834 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
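
Note: deleting a pod sets its deletionTimestamp and grace period; the kubelet sends SIGTERM, waits out the grace period, then force-kills, and the API object disappears once termination is confirmed — the "no pod exists" message above is the test observing that. The per-pod default comes from the spec (a kubectl delete --grace-period=N overrides it); image here is illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove
spec:
  terminationGracePeriodSeconds: 30   # the API default, shown explicitly
  containers:
  - name: webserver
    image: nginx:1.14-alpine          # illustrative image
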
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:38:07.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b1f43208-c368-453c-b416-054972123874
STEP: Creating a pod to test consume secrets
Jan  8 13:38:07.601: INFO: Waiting up to 5m0s for pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be" in namespace "secrets-656" to be "success or failure"
Jan  8 13:38:07.613: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.399ms
Jan  8 13:38:09.655: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053794188s
Jan  8 13:38:11.665: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064464523s
Jan  8 13:38:13.675: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074400085s
Jan  8 13:38:15.700: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099557006s
STEP: Saw pod success
Jan  8 13:38:15.701: INFO: Pod "pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be" satisfied condition "success or failure"
Jan  8 13:38:15.708: INFO: Trying to get logs from node iruya-node pod pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be container secret-env-test: 
STEP: delete the pod
Jan  8 13:38:15.771: INFO: Waiting for pod pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be to disappear
Jan  8 13:38:15.790: INFO: Pod pod-secrets-424803a9-76b1-48b3-ae32-696e3f67b0be no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:38:15.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-656" for this suite.
Jan  8 13:38:22.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:38:22.155: INFO: namespace secrets-656 deletion completed in 6.277931275s

• [SLOW TEST:14.713 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
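
Note: the secret reaches the container as an environment variable via secretKeyRef. A sketch (image and key name illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                 # illustrative
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test        # Secret assumed to exist in the same namespace
          key: data-1              # illustrative key name
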
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:38:22.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  8 13:38:38.401: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  8 13:38:38.462: INFO: Pod pod-with-prestop-http-hook still exists
Jan  8 13:38:40.463: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  8 13:38:40.475: INFO: Pod pod-with-prestop-http-hook still exists
Jan  8 13:38:42.463: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  8 13:38:42.476: INFO: Pod pod-with-prestop-http-hook still exists
Jan  8 13:38:44.463: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  8 13:38:44.480: INFO: Pod pod-with-prestop-http-hook still exists
Jan  8 13:38:46.463: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  8 13:38:46.477: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:38:46.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2790" for this suite.
Jan  8 13:39:08.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:39:08.706: INFO: namespace container-lifecycle-hook-2790 deletion completed in 22.131795156s

• [SLOW TEST:46.551 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
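
Note: a preStop httpGet hook makes the kubelet issue the GET before sending SIGTERM, and deletion waits for the hook plus the grace period, which is why the pod lingers through the "still exists" polls above. A sketch; the path and port are assumptions, and the suite actually points the hook at the separate handler pod created in BeforeEach.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1    # illustrative
    lifecycle:
      preStop:
        httpGet:
          path: /prestop           # illustrative; must be served by the hook target
          port: 8080
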
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:39:08.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:39:08.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6747" for this suite.
Jan  8 13:39:14.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:39:14.979: INFO: namespace services-6747 deletion completed in 6.135802267s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.273 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:39:14.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  8 13:39:15.123: INFO: Waiting up to 5m0s for pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb" in namespace "emptydir-7009" to be "success or failure"
Jan  8 13:39:15.190: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb": Phase="Pending", Reason="", readiness=false. Elapsed: 66.688022ms
Jan  8 13:39:17.196: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072468743s
Jan  8 13:39:19.204: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081164769s
Jan  8 13:39:21.212: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088701421s
Jan  8 13:39:23.230: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106587119s
STEP: Saw pod success
Jan  8 13:39:23.230: INFO: Pod "pod-7e22108e-c1c3-4fd7-b595-c30a118428cb" satisfied condition "success or failure"
Jan  8 13:39:23.240: INFO: Trying to get logs from node iruya-node pod pod-7e22108e-c1c3-4fd7-b595-c30a118428cb container test-container: 
STEP: delete the pod
Jan  8 13:39:23.397: INFO: Waiting for pod pod-7e22108e-c1c3-4fd7-b595-c30a118428cb to disappear
Jan  8 13:39:23.420: INFO: Pod pod-7e22108e-c1c3-4fd7-b595-c30a118428cb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:39:23.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7009" for this suite.
Jan  8 13:39:29.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:39:29.574: INFO: namespace emptydir-7009 deletion completed in 6.143412583s

• [SLOW TEST:14.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
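
Note: the "(root,0666,tmpfs)" case is the same emptyDir check but with medium: Memory, so the volume is a RAM-backed tmpfs. A sketch that creates a file, sets mode 0666, and shows the mount; image and command are stand-ins for the suite's test image.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # tmpfs-backed, so contents live in RAM
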
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:39:29.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  8 13:39:38.336: INFO: Successfully updated pod "labelsupdateacae5579-ea2f-4376-bbfa-e4d0c2f12604"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:39:40.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2261" for this suite.
Jan  8 13:40:02.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:40:02.647: INFO: namespace projected-2261 deletion completed in 22.144583351s

• [SLOW TEST:33.073 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
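
Note: pod labels projected through a downwardAPI volume are live: when the labels change, the kubelet rewrites the mounted file, and the "Successfully updated pod" step above is that edit landing. A sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox                         # illustrative
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # kubelet rewrites this file when labels change
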
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:40:02.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  8 13:40:02.714: INFO: namespace kubectl-3180
Jan  8 13:40:02.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3180'
Jan  8 13:40:05.255: INFO: stderr: ""
Jan  8 13:40:05.255: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  8 13:40:06.262: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:06.262: INFO: Found 0 / 1
Jan  8 13:40:07.267: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:07.267: INFO: Found 0 / 1
Jan  8 13:40:08.265: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:08.266: INFO: Found 0 / 1
Jan  8 13:40:09.263: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:09.263: INFO: Found 0 / 1
Jan  8 13:40:10.263: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:10.263: INFO: Found 0 / 1
Jan  8 13:40:11.262: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:11.262: INFO: Found 0 / 1
Jan  8 13:40:12.270: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:12.270: INFO: Found 0 / 1
Jan  8 13:40:13.272: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:13.272: INFO: Found 1 / 1
Jan  8 13:40:13.272: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan  8 13:40:13.277: INFO: Selector matched 1 pod for map[app:redis]
Jan  8 13:40:13.277: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan  8 13:40:13.277: INFO: wait on redis-master startup in kubectl-3180
Jan  8 13:40:13.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2r8sg redis-master --namespace=kubectl-3180'
Jan  8 13:40:13.526: INFO: stderr: ""
Jan  8 13:40:13.526: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Jan 13:40:11.197 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jan 13:40:11.197 # Server started, Redis version 3.2.12\n1:M 08 Jan 13:40:11.197 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jan 13:40:11.198 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  8 13:40:13.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3180'
Jan  8 13:40:13.747: INFO: stderr: ""
Jan  8 13:40:13.747: INFO: stdout: "service/rm2 exposed\n"
Jan  8 13:40:13.759: INFO: Service rm2 in namespace kubectl-3180 found.
STEP: exposing service
Jan  8 13:40:15.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3180'
Jan  8 13:40:16.084: INFO: stderr: ""
Jan  8 13:40:16.085: INFO: stdout: "service/rm3 exposed\n"
Jan  8 13:40:16.089: INFO: Service rm3 in namespace kubectl-3180 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:40:18.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3180" for this suite.
Jan  8 13:40:40.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:40:40.259: INFO: namespace kubectl-3180 deletion completed in 22.138690198s

• [SLOW TEST:37.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
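
Note: kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 is roughly equivalent to creating this Service by hand; the selector is inherited from the RC (see the map[app:redis] lines above).

apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # taken over from the replication controller's selector
  ports:
  - port: 1234
    targetPort: 6379
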
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:40:40.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  8 13:40:40.386: INFO: Waiting up to 5m0s for pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5" in namespace "emptydir-6774" to be "success or failure"
Jan  8 13:40:40.394: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080632ms
Jan  8 13:40:42.406: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019940531s
Jan  8 13:40:44.555: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16901486s
Jan  8 13:40:46.569: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18359829s
Jan  8 13:40:48.585: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.199232681s
STEP: Saw pod success
Jan  8 13:40:48.585: INFO: Pod "pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5" satisfied condition "success or failure"
Jan  8 13:40:48.590: INFO: Trying to get logs from node iruya-node pod pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5 container test-container: 
STEP: delete the pod
Jan  8 13:40:48.688: INFO: Waiting for pod pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5 to disappear
Jan  8 13:40:48.722: INFO: Pod pod-3f99aa01-6bb8-4b0a-ad67-c3d03ac690c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:40:48.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6774" for this suite.
Jan  8 13:40:54.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:40:54.908: INFO: namespace emptydir-6774 deletion completed in 6.176263972s

• [SLOW TEST:14.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:40:54.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-36d3d08b-0d37-4186-aab1-b8acc7bac1a3 in namespace container-probe-2773
Jan  8 13:41:03.101: INFO: Started pod test-webserver-36d3d08b-0d37-4186-aab1-b8acc7bac1a3 in namespace container-probe-2773
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 13:41:03.122: INFO: Initial restart count of pod test-webserver-36d3d08b-0d37-4186-aab1-b8acc7bac1a3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:45:05.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2773" for this suite.
Jan  8 13:45:11.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:45:11.218: INFO: namespace container-probe-2773 deletion completed in 6.135401385s

• [SLOW TEST:256.309 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
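
Note: the pod serves HTTP, carries an httpGet liveness probe, and the test then watches restartCount stay at 0 for about four minutes (13:41 to 13:45 above). A sketch under the assumption that the probed path answers 2xx/3xx; the image is a stand-in for the suite's own webserver, whose /healthz the test name refers to.

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine     # illustrative; the probed path must return 2xx/3xx
    livenessProbe:
      httpGet:
        path: /                  # stand-in for the /healthz endpoint in the test name
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 3
      failureThreshold: 1        # a single failed probe would trigger a restart
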
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:45:11.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan  8 13:45:11.378: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan  8 13:45:11.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:11.991: INFO: stderr: ""
Jan  8 13:45:11.991: INFO: stdout: "service/redis-slave created\n"
Jan  8 13:45:11.992: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan  8 13:45:11.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:12.651: INFO: stderr: ""
Jan  8 13:45:12.651: INFO: stdout: "service/redis-master created\n"
Jan  8 13:45:12.651: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan  8 13:45:12.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:13.448: INFO: stderr: ""
Jan  8 13:45:13.448: INFO: stdout: "service/frontend created\n"
Jan  8 13:45:13.449: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, comment
          # out the 'value: dns' line above and uncomment the line below so
          # the frontend finds service host info through environment
          # variables instead:
          # value: env
        ports:
        - containerPort: 80

Jan  8 13:45:13.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:14.111: INFO: stderr: ""
Jan  8 13:45:14.111: INFO: stdout: "deployment.apps/frontend created\n"
Jan  8 13:45:14.111: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan  8 13:45:14.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:14.714: INFO: stderr: ""
Jan  8 13:45:14.715: INFO: stdout: "deployment.apps/redis-master created\n"
Jan  8 13:45:14.716: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, comment
          # out the 'value: dns' line above and uncomment the line below so
          # the slave finds the master service's host through an
          # environment variable instead:
          # value: env
        ports:
        - containerPort: 6379

Jan  8 13:45:14.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-965'
Jan  8 13:45:15.808: INFO: stderr: ""
Jan  8 13:45:15.808: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan  8 13:45:15.808: INFO: Waiting for all frontend pods to be Running.
Jan  8 13:45:35.861: INFO: Waiting for frontend to serve content.
Jan  8 13:45:38.392: INFO: Trying to add a new entry to the guestbook.
Jan  8 13:45:38.486: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan  8 13:45:38.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:38.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:38.807: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 13:45:38.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:39.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:39.137: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 13:45:39.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:39.312: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:39.312: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 13:45:39.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:39.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:39.580: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 13:45:39.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:39.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:39.715: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan  8 13:45:39.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-965'
Jan  8 13:45:39.964: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 13:45:39.965: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:45:39.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-965" for this suite.
Jan  8 13:46:32.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:46:32.192: INFO: namespace kubectl-965 deletion completed in 52.175418454s

• [SLOW TEST:80.972 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
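As the inline comment in the frontend Service manifest above suggests, a cluster that supports external load balancers can expose the guestbook by uncommenting the service type; that variant would read:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # the line the manifest above ships commented out
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
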
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:46:32.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  8 13:46:42.380: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0a6c13d2-71ef-4153-9cd0-192e94f3f0e8,GenerateName:,Namespace:events-3597,SelfLink:/api/v1/namespaces/events-3597/pods/send-events-0a6c13d2-71ef-4153-9cd0-192e94f3f0e8,UID:cac93e36-408e-49ac-8c6f-9e594c34caa4,ResourceVersion:19778189,Generation:0,CreationTimestamp:2020-01-08 13:46:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 314680540,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bb7fc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bb7fc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bb7fc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0022750f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002275110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 13:46:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 13:46:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 13:46:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 13:46:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-08 13:46:32 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-08 13:46:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://fdb15ac485ce7c90cfebff470e00e0a1d8a07a3e869a1c9ccbabe51e1a731bdf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  8 13:46:44.390: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  8 13:46:46.399: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:46:46.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3597" for this suite.
Jan  8 13:47:28.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:47:28.683: INFO: namespace events-3597 deletion completed in 42.235159655s

• [SLOW TEST:56.491 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:47:28.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:47:28.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:47:36.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6624" for this suite.
Jan  8 13:48:18.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:48:19.161: INFO: namespace pods-6624 deletion completed in 42.270185608s

• [SLOW TEST:50.477 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:48:19.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-02eb1531-dc4e-4dff-818d-5fa716c626ee
STEP: Creating a pod to test consume secrets
Jan  8 13:48:19.362: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546" in namespace "projected-6122" to be "success or failure"
Jan  8 13:48:19.462: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546": Phase="Pending", Reason="", readiness=false. Elapsed: 100.287936ms
Jan  8 13:48:21.472: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109673938s
Jan  8 13:48:23.480: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117510544s
Jan  8 13:48:25.493: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130586165s
Jan  8 13:48:27.531: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.169046486s
STEP: Saw pod success
Jan  8 13:48:27.531: INFO: Pod "pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546" satisfied condition "success or failure"
Jan  8 13:48:27.537: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546 container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 13:48:27.619: INFO: Waiting for pod pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546 to disappear
Jan  8 13:48:27.697: INFO: Pod pod-projected-secrets-8dffc99e-30ab-4221-9544-ddd212bc1546 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:48:27.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6122" for this suite.
Jan  8 13:48:34.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:48:34.866: INFO: namespace projected-6122 deletion completed in 7.163367545s

• [SLOW TEST:15.704 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
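A minimal sketch of the projected-secret volume this test consumes; the secret name is taken from the log above, while the pod name, image, mount path, and mode value are assumptions for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # the file mode being verified
      sources:
      - secret:
          name: projected-secret-test-02eb1531-dc4e-4dff-818d-5fa716c626ee
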
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:48:34.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  8 13:48:34.999: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 13:48:35.014: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 13:48:35.018: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  8 13:48:35.030: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  8 13:48:35.030: INFO: 	Container weave ready: true, restart count 0
Jan  8 13:48:35.030: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 13:48:35.030: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.030: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 13:48:35.030: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  8 13:48:35.043: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  8 13:48:35.043: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container coredns ready: true, restart count 0
Jan  8 13:48:35.043: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container etcd ready: true, restart count 0
Jan  8 13:48:35.043: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  8 13:48:35.043: INFO: 	Container weave ready: true, restart count 0
Jan  8 13:48:35.043: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 13:48:35.043: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container coredns ready: true, restart count 0
Jan  8 13:48:35.043: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  8 13:48:35.043: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 13:48:35.043: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  8 13:48:35.043: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-80f28176-b033-46dd-89c0-ec5e319883d6 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-80f28176-b033-46dd-89c0-ec5e319883d6 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-80f28176-b033-46dd-89c0-ec5e319883d6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:48:51.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1989" for this suite.
Jan  8 13:49:11.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:49:11.592: INFO: namespace sched-pred-1989 deletion completed in 20.231324205s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:36.725 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
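The relaunched pod in this test schedules only because its nodeSelector matches the random label applied to iruya-node; a sketch using the label from the log above (pod name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: with-labels   # hypothetical
spec:
  nodeSelector:
    kubernetes.io/e2e-80f28176-b033-46dd-89c0-ec5e319883d6: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # assumed image
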
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:49:11.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-b57d22d1-0533-4127-8c66-113328c1a8b2 in namespace container-probe-9563
Jan  8 13:49:19.752: INFO: Started pod liveness-b57d22d1-0533-4127-8c66-113328c1a8b2 in namespace container-probe-9563
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 13:49:19.756: INFO: Initial restart count of pod liveness-b57d22d1-0533-4127-8c66-113328c1a8b2 is 0
Jan  8 13:49:39.890: INFO: Restart count of pod container-probe-9563/liveness-b57d22d1-0533-4127-8c66-113328c1a8b2 is now 1 (20.133532713s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:49:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9563" for this suite.
Jan  8 13:49:46.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:49:46.106: INFO: namespace container-probe-9563 deletion completed in 6.112349257s

• [SLOW TEST:34.513 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:49:46.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-af572aa7-bb36-4c74-9f11-385e18c717d1
STEP: Creating a pod to test consume configMaps
Jan  8 13:49:46.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458" in namespace "projected-3141" to be "success or failure"
Jan  8 13:49:46.260: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458": Phase="Pending", Reason="", readiness=false. Elapsed: 9.432317ms
Jan  8 13:49:48.271: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020225589s
Jan  8 13:49:50.288: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036756416s
Jan  8 13:49:52.299: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047977657s
Jan  8 13:49:54.351: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099946453s
STEP: Saw pod success
Jan  8 13:49:54.351: INFO: Pod "pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458" satisfied condition "success or failure"
Jan  8 13:49:54.355: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 13:49:54.436: INFO: Waiting for pod pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458 to disappear
Jan  8 13:49:54.448: INFO: Pod pod-projected-configmaps-14e6fa1e-0116-452f-90a0-dd447715c458 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:49:54.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3141" for this suite.
Jan  8 13:50:00.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:50:00.691: INFO: namespace projected-3141 deletion completed in 6.198873233s

• [SLOW TEST:14.585 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:50:00.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1889/configmap-test-17432d11-6606-437e-82d1-5ad1ec1c2c29
STEP: Creating a pod to test consume configMaps
Jan  8 13:50:00.889: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5" in namespace "configmap-1889" to be "success or failure"
Jan  8 13:50:00.903: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.164652ms
Jan  8 13:50:02.929: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039956856s
Jan  8 13:50:04.942: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053342402s
Jan  8 13:50:06.957: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068246137s
Jan  8 13:50:08.965: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076001679s
STEP: Saw pod success
Jan  8 13:50:08.965: INFO: Pod "pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5" satisfied condition "success or failure"
Jan  8 13:50:08.970: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5 container env-test: 
STEP: delete the pod
Jan  8 13:50:09.029: INFO: Waiting for pod pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5 to disappear
Jan  8 13:50:09.045: INFO: Pod pod-configmaps-f3531639-0208-4051-9fe9-bc27e25403e5 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:50:09.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1889" for this suite.
Jan  8 13:50:15.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:50:15.295: INFO: namespace configmap-1889 deletion completed in 6.239764382s

• [SLOW TEST:14.603 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
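Consuming a ConfigMap through the environment, roughly as this test does; the ConfigMap name comes from the log above, while the key, variable name, and image are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]   # prints the injected variables, then exits
    env:
    - name: CONFIG_DATA_1   # assumed variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-17432d11-6606-437e-82d1-5ad1ec1c2c29
          key: data-1   # assumed key
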
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:50:15.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9195
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9195
STEP: Deleting pre-stop pod
Jan  8 13:50:36.564: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:50:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9195" for this suite.
Jan  8 13:51:18.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:51:18.754: INFO: namespace prestop-9195 deletion completed in 42.139670424s

• [SLOW TEST:63.458 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
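The "prestop": 1 entry the server reports implies it was notified through a preStop lifecycle hook on the deleted tester pod; a minimal sketch of that mechanism (image, command, and endpoint are assumptions, not taken from the test source):

apiVersion: v1
kind: Pod
metadata:
  name: tester   # name modeled on the tester pod in the log
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # assumed: ping the server pod once before this container is killed
          command: ["wget", "-qO-", "http://server:8080/prestop"]
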
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:51:18.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8dd255d8-ed0a-4846-a6e2-d615ece1a34c
STEP: Creating a pod to test consume configMaps
Jan  8 13:51:18.880: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f" in namespace "projected-3661" to be "success or failure"
Jan  8 13:51:18.899: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.381311ms
Jan  8 13:51:20.908: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028375543s
Jan  8 13:51:22.922: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042629956s
Jan  8 13:51:24.935: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055566523s
Jan  8 13:51:26.944: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064242403s
Jan  8 13:51:28.959: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079063701s
STEP: Saw pod success
Jan  8 13:51:28.959: INFO: Pod "pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f" satisfied condition "success or failure"
Jan  8 13:51:28.970: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 13:51:29.032: INFO: Waiting for pod pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f to disappear
Jan  8 13:51:29.043: INFO: Pod pod-projected-configmaps-56bf2fd0-d3af-49a1-901d-6eff685b3a8f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:51:29.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3661" for this suite.
Jan  8 13:51:35.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:51:35.250: INFO: namespace projected-3661 deletion completed in 6.193282398s

• [SLOW TEST:16.495 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:51:35.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  8 13:51:35.325: INFO: PodSpec: initContainers in spec.initContainers
Jan  8 13:52:36.713: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-afbaf995-0947-4753-ab63-e05d6e5f462e", GenerateName:"", Namespace:"init-container-7854", SelfLink:"/api/v1/namespaces/init-container-7854/pods/pod-init-afbaf995-0947-4753-ab63-e05d6e5f462e", UID:"fb18159a-e570-4912-b562-0a67b7a05cf2", ResourceVersion:"19778945", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714088295, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"325707827"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tj7c5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014f0480), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj7c5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj7c5", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tj7c5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000e8fd58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021b6900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e8fde0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000e8fe00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000e8fe08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000e8fe0c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714088295, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714088295, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714088295, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714088295, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc003282360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d759d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001d75a40)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://4ec1a96f978a13010eafbf2521d969bb7fb0c436d0509808e551909405943857"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0032823a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003282380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:52:36.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7854" for this suite.
Jan  8 13:53:00.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:53:00.958: INFO: namespace init-container-7854 deletion completed in 24.222844165s

• [SLOW TEST:85.708 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
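The pod under test, recoverable from the dump above, reduces to the manifest below: init1 always fails, so init2 and the app container run1 never start, and with restartPolicy Always the kubelet keeps retrying init1 (restartCount 3 by the time of the dump).

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-afbaf995-0947-4753-ab63-e05d6e5f462e
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # exits non-zero on every attempt
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]    # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"    # 50Mi, as in the dump
      requests:
        cpu: 100m
        memory: "52428800"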
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:53:00.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 13:53:01.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-93'
Jan  8 13:53:03.227: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 13:53:03.227: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan  8 13:53:05.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-93'
Jan  8 13:53:05.457: INFO: stderr: ""
Jan  8 13:53:05.457: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:53:05.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-93" for this suite.
Jan  8 13:53:19.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:53:19.617: INFO: namespace kubectl-93 deletion completed in 14.153124172s

• [SLOW TEST:18.659 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
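The deprecated deployment generator invoked above produces roughly the Deployment below; the run label and single replica are the generator's conventions, though exact defaults can vary by kubectl version.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
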
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:53:19.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  8 13:53:26.138: INFO: 8 pods remaining
Jan  8 13:53:26.139: INFO: 0 pods have nil DeletionTimestamp
Jan  8 13:53:26.139: INFO: 
STEP: Gathering metrics
W0108 13:53:26.803242       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 13:53:26.803: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:53:26.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1582" for this suite.
Jan  8 13:53:37.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:53:37.154: INFO: namespace gc-1582 deletion completed in 10.345967744s

• [SLOW TEST:17.536 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
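The wait observed above ("8 pods remaining") is foreground cascading deletion: the deleteOptions in this test keep the owner alive, via the foregroundDeletion finalizer, until the garbage collector has removed every dependent pod. A minimal manual reproduction against the REST API, assuming a replication controller named my-rc in the default namespace and kubectl proxy on its default port:

  # Expose the apiserver locally (defaults to 127.0.0.1:8001).
  kubectl proxy &

  # Delete with propagationPolicy=Foreground: the RC is given a
  # deletionTimestamp plus the foregroundDeletion finalizer and is only
  # removed once all of its pods are gone.
  curl -X DELETE \
    -H "Content-Type: application/json" \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc
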
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:53:37.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  8 13:53:37.257: INFO: Waiting up to 5m0s for pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089" in namespace "var-expansion-1242" to be "success or failure"
Jan  8 13:53:37.322: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Pending", Reason="", readiness=false. Elapsed: 64.986224ms
Jan  8 13:53:39.331: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073495472s
Jan  8 13:53:41.376: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118547763s
Jan  8 13:53:43.388: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131068966s
Jan  8 13:53:45.455: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197932462s
Jan  8 13:53:47.464: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.207007257s
STEP: Saw pod success
Jan  8 13:53:47.464: INFO: Pod "var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089" satisfied condition "success or failure"
Jan  8 13:53:47.468: INFO: Trying to get logs from node iruya-node pod var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089 container dapi-container: 
STEP: delete the pod
Jan  8 13:53:47.537: INFO: Waiting for pod var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089 to disappear
Jan  8 13:53:47.572: INFO: Pod var-expansion-8b793b4e-a491-4b32-81b9-71bd7096c089 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:53:47.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1242" for this suite.
Jan  8 13:53:53.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:53:53.819: INFO: namespace var-expansion-1242 deletion completed in 6.241242103s

• [SLOW TEST:16.664 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
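The pod created above composes one environment variable from another with $(VAR) syntax; references are resolved by the kubelet before the container process starts. A minimal sketch of such a pod (the container name dapi-container matches the log above; the pod name and variable values are hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      # FIRST_VAR must be declared before COMPOSED_VAR can reference it.
      env:
      - name: FIRST_VAR
        value: "foo"
      - name: COMPOSED_VAR
        value: "prefix-$(FIRST_VAR)-suffix"
      # $(COMPOSED_VAR) here is expanded by Kubernetes, not by the shell.
      command: ["sh", "-c", "echo $(COMPOSED_VAR)"]
  EOF
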
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:53:53.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ght6m in namespace proxy-2726
I0108 13:53:54.103730       8 runners.go:180] Created replication controller with name: proxy-service-ght6m, namespace: proxy-2726, replica count: 1
I0108 13:53:55.154678       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:53:56.154977       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:53:57.155345       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:53:58.155686       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:53:59.156308       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:54:00.156807       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:54:01.157120       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 13:54:02.157386       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 13:54:03.157828       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0108 13:54:04.158250       8 runners.go:180] proxy-service-ght6m Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 13:54:04.177: INFO: setup took 10.235982945s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
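Every attempt logged below goes through the apiserver proxy subresource; the same endpoint forms can be hit by hand, for example (pod and service names taken from this run, kubectl proxy assumed on 127.0.0.1:8001):

  kubectl proxy &

  # Proxy to a named service port:
  curl http://127.0.0.1:8001/api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/

  # Proxy straight to a pod port; scheme-qualified forms select http or https:
  curl http://127.0.0.1:8001/api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/
  curl http://127.0.0.1:8001/api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/
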
Jan  8 13:54:04.206: INFO: (0) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 29.376318ms)
Jan  8 13:54:04.206: INFO: (0) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 29.417916ms)
Jan  8 13:54:04.206: INFO: (0) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 29.152987ms)
Jan  8 13:54:04.207: INFO: (0) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 30.023442ms)
Jan  8 13:54:04.207: INFO: (0) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 30.451156ms)
Jan  8 13:54:04.207: INFO: (0) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 30.624088ms)
Jan  8 13:54:04.208: INFO: (0) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 30.771953ms)
Jan  8 13:54:04.209: INFO: (0) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 31.670251ms)
Jan  8 13:54:04.210: INFO: (0) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 33.114381ms)
Jan  8 13:54:04.210: INFO: (0) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 33.16221ms)
Jan  8 13:54:04.214: INFO: (0) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 37.163479ms)
Jan  8 13:54:04.215: INFO: (0) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: ... (200; 12.818644ms)
Jan  8 13:54:04.234: INFO: (1) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 13.457879ms)
Jan  8 13:54:04.235: INFO: (1) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 14.543319ms)
Jan  8 13:54:04.235: INFO: (1) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 14.597985ms)
Jan  8 13:54:04.235: INFO: (1) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 14.84032ms)
Jan  8 13:54:04.235: INFO: (1) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 14.82045ms)
Jan  8 13:54:04.236: INFO: (1) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 14.903692ms)
Jan  8 13:54:04.236: INFO: (1) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 15.503085ms)
Jan  8 13:54:04.236: INFO: (1) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 15.583736ms)
Jan  8 13:54:04.254: INFO: (2) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 16.028562ms)
Jan  8 13:54:04.255: INFO: (2) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 15.956651ms)
Jan  8 13:54:04.255: INFO: (2) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 16.096174ms)
Jan  8 13:54:04.255: INFO: (2) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 16.584688ms)
Jan  8 13:54:04.257: INFO: (2) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 18.044355ms)
Jan  8 13:54:04.257: INFO: (2) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 18.565719ms)
Jan  8 13:54:04.257: INFO: (2) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 18.80618ms)
Jan  8 13:54:04.260: INFO: (2) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 21.964037ms)
Jan  8 13:54:04.260: INFO: (2) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 21.948598ms)
Jan  8 13:54:04.261: INFO: (2) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 23.043238ms)
Jan  8 13:54:04.263: INFO: (2) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 24.308291ms)
Jan  8 13:54:04.263: INFO: (2) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 24.5993ms)
Jan  8 13:54:04.283: INFO: (3) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 19.906837ms)
Jan  8 13:54:04.283: INFO: (3) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 19.926187ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 20.779426ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 20.796622ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 20.856104ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 20.774982ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 20.929726ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 20.852423ms)
Jan  8 13:54:04.284: INFO: (3) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 21.106262ms)
Jan  8 13:54:04.286: INFO: (3) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 23.155072ms)
Jan  8 13:54:04.287: INFO: (3) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 23.411065ms)
Jan  8 13:54:04.287: INFO: (3) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 23.397681ms)
Jan  8 13:54:04.287: INFO: (3) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 23.516648ms)
Jan  8 13:54:04.302: INFO: (4) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 15.136799ms)
Jan  8 13:54:04.302: INFO: (4) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 15.723512ms)
Jan  8 13:54:04.303: INFO: (4) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 16.00727ms)
Jan  8 13:54:04.303: INFO: (4) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 16.194286ms)
Jan  8 13:54:04.303: INFO: (4) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 16.529728ms)
Jan  8 13:54:04.304: INFO: (4) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 16.73883ms)
Jan  8 13:54:04.304: INFO: (4) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 17.010916ms)
Jan  8 13:54:04.304: INFO: (4) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: ... (200; 17.635664ms)
Jan  8 13:54:04.304: INFO: (4) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 17.588888ms)
Jan  8 13:54:04.304: INFO: (4) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 17.597334ms)
Jan  8 13:54:04.305: INFO: (4) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 18.303929ms)
Jan  8 13:54:04.305: INFO: (4) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 18.73325ms)
Jan  8 13:54:04.305: INFO: (4) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 18.66098ms)
Jan  8 13:54:04.305: INFO: (4) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 18.644102ms)
Jan  8 13:54:04.315: INFO: (5) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 9.269265ms)
Jan  8 13:54:04.319: INFO: (5) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 13.409823ms)
Jan  8 13:54:04.319: INFO: (5) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 13.423204ms)
Jan  8 13:54:04.319: INFO: (5) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 13.49323ms)
Jan  8 13:54:04.319: INFO: (5) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 13.590147ms)
Jan  8 13:54:04.319: INFO: (5) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 13.465719ms)
Jan  8 13:54:04.321: INFO: (5) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 15.590075ms)
Jan  8 13:54:04.321: INFO: (5) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 15.683673ms)
Jan  8 13:54:04.321: INFO: (5) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 15.696699ms)
Jan  8 13:54:04.321: INFO: (5) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: ... (200; 18.062364ms)
Jan  8 13:54:04.324: INFO: (5) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 17.987181ms)
Jan  8 13:54:04.336: INFO: (6) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 12.042018ms)
Jan  8 13:54:04.336: INFO: (6) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 12.041036ms)
Jan  8 13:54:04.337: INFO: (6) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 12.758108ms)
Jan  8 13:54:04.337: INFO: (6) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 13.469128ms)
Jan  8 13:54:04.338: INFO: (6) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 13.806598ms)
Jan  8 13:54:04.338: INFO: (6) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 14.114561ms)
Jan  8 13:54:04.338: INFO: (6) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 14.021329ms)
Jan  8 13:54:04.338: INFO: (6) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 14.545086ms)
Jan  8 13:54:04.339: INFO: (6) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 14.798815ms)
Jan  8 13:54:04.339: INFO: (6) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 14.652157ms)
Jan  8 13:54:04.341: INFO: (6) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 16.582255ms)
Jan  8 13:54:04.341: INFO: (6) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 16.981558ms)
Jan  8 13:54:04.341: INFO: (6) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 16.859529ms)
Jan  8 13:54:04.341: INFO: (6) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 16.238786ms)
Jan  8 13:54:04.360: INFO: (7) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 16.97433ms)
Jan  8 13:54:04.361: INFO: (7) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 17.909938ms)
Jan  8 13:54:04.361: INFO: (7) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 17.991742ms)
Jan  8 13:54:04.361: INFO: (7) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 17.637402ms)
Jan  8 13:54:04.361: INFO: (7) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 17.647563ms)
Jan  8 13:54:04.361: INFO: (7) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 17.784161ms)
Jan  8 13:54:04.362: INFO: (7) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 18.946411ms)
Jan  8 13:54:04.363: INFO: (7) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 19.573555ms)
Jan  8 13:54:04.364: INFO: (7) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 21.252172ms)
Jan  8 13:54:04.365: INFO: (7) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 21.49149ms)
Jan  8 13:54:04.365: INFO: (7) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 21.814478ms)
Jan  8 13:54:04.367: INFO: (7) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 23.712369ms)
Jan  8 13:54:04.382: INFO: (8) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 14.428172ms)
Jan  8 13:54:04.382: INFO: (8) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 14.692091ms)
Jan  8 13:54:04.383: INFO: (8) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 15.312449ms)
Jan  8 13:54:04.383: INFO: (8) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 15.476209ms)
Jan  8 13:54:04.383: INFO: (8) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 17.079149ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 17.469862ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 17.745806ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 17.709781ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 17.534941ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 17.748968ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 17.731584ms)
Jan  8 13:54:04.385: INFO: (8) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 17.987095ms)
Jan  8 13:54:04.386: INFO: (8) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 18.287372ms)
Jan  8 13:54:04.386: INFO: (8) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 18.617473ms)
Jan  8 13:54:04.396: INFO: (9) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 9.509667ms)
Jan  8 13:54:04.396: INFO: (9) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 9.595473ms)
Jan  8 13:54:04.396: INFO: (9) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 9.714023ms)
Jan  8 13:54:04.396: INFO: (9) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 9.517745ms)
Jan  8 13:54:04.396: INFO: (9) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 10.218156ms)
Jan  8 13:54:04.397: INFO: (9) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 10.367715ms)
Jan  8 13:54:04.397: INFO: (9) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 10.728403ms)
Jan  8 13:54:04.398: INFO: (9) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 11.487771ms)
Jan  8 13:54:04.398: INFO: (9) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 12.038513ms)
Jan  8 13:54:04.401: INFO: (9) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 14.936897ms)
Jan  8 13:54:04.402: INFO: (9) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 15.725174ms)
Jan  8 13:54:04.402: INFO: (9) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 15.850485ms)
Jan  8 13:54:04.403: INFO: (9) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 16.380905ms)
Jan  8 13:54:04.403: INFO: (9) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 16.658893ms)
Jan  8 13:54:04.406: INFO: (10) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 3.391339ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 7.825101ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 7.786922ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 7.80978ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 7.92905ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 7.995955ms)
Jan  8 13:54:04.411: INFO: (10) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 8.168191ms)
Jan  8 13:54:04.412: INFO: (10) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 8.372435ms)
Jan  8 13:54:04.412: INFO: (10) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 8.646875ms)
Jan  8 13:54:04.413: INFO: (10) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 9.760318ms)
Jan  8 13:54:04.414: INFO: (10) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 10.704704ms)
Jan  8 13:54:04.414: INFO: (10) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 10.599582ms)
Jan  8 13:54:04.414: INFO: (10) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 10.7358ms)
Jan  8 13:54:04.414: INFO: (10) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 10.845614ms)
Jan  8 13:54:04.414: INFO: (10) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 11.135368ms)
Jan  8 13:54:04.420: INFO: (11) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 5.519657ms)
Jan  8 13:54:04.420: INFO: (11) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 5.882271ms)
Jan  8 13:54:04.421: INFO: (11) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 6.59964ms)
Jan  8 13:54:04.421: INFO: (11) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 6.91699ms)
Jan  8 13:54:04.421: INFO: (11) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 6.806909ms)
Jan  8 13:54:04.424: INFO: (11) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 9.238926ms)
Jan  8 13:54:04.424: INFO: (11) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 9.365022ms)
Jan  8 13:54:04.424: INFO: (11) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 10.019063ms)
Jan  8 13:54:04.427: INFO: (11) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 12.066388ms)
Jan  8 13:54:04.427: INFO: (11) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 12.645833ms)
Jan  8 13:54:04.428: INFO: (11) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 13.848597ms)
Jan  8 13:54:04.429: INFO: (11) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 14.026893ms)
Jan  8 13:54:04.435: INFO: (12) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 6.337486ms)
Jan  8 13:54:04.435: INFO: (12) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 6.639156ms)
Jan  8 13:54:04.436: INFO: (12) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 6.903915ms)
Jan  8 13:54:04.438: INFO: (12) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 8.920353ms)
Jan  8 13:54:04.438: INFO: (12) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 9.12927ms)
Jan  8 13:54:04.438: INFO: (12) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 9.195272ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 14.3796ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 14.524639ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 14.502175ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 14.554491ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: ... (200; 14.558003ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 14.65349ms)
Jan  8 13:54:04.443: INFO: (12) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 14.581681ms)
Jan  8 13:54:04.457: INFO: (13) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 13.444224ms)
Jan  8 13:54:04.457: INFO: (13) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 13.427894ms)
Jan  8 13:54:04.457: INFO: (13) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 13.654226ms)
Jan  8 13:54:04.457: INFO: (13) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 13.684825ms)
Jan  8 13:54:04.458: INFO: (13) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 14.224106ms)
Jan  8 13:54:04.460: INFO: (13) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 16.023204ms)
Jan  8 13:54:04.460: INFO: (13) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 16.271926ms)
Jan  8 13:54:04.460: INFO: (13) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 16.502756ms)
Jan  8 13:54:04.461: INFO: (13) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 16.87746ms)
Jan  8 13:54:04.461: INFO: (13) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 17.165584ms)
Jan  8 13:54:04.461: INFO: (13) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 17.473524ms)
Jan  8 13:54:04.463: INFO: (13) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 18.925826ms)
Jan  8 13:54:04.463: INFO: (13) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 18.86408ms)
Jan  8 13:54:04.463: INFO: (13) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 18.889047ms)
Jan  8 13:54:04.465: INFO: (13) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 21.608894ms)
Jan  8 13:54:04.477: INFO: (14) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 11.368325ms)
Jan  8 13:54:04.479: INFO: (14) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 13.297602ms)
Jan  8 13:54:04.479: INFO: (14) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 13.403505ms)
Jan  8 13:54:04.479: INFO: (14) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 13.610589ms)
Jan  8 13:54:04.479: INFO: (14) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 13.421354ms)
Jan  8 13:54:04.479: INFO: (14) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 13.578319ms)
Jan  8 13:54:04.480: INFO: (14) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 13.885539ms)
Jan  8 13:54:04.480: INFO: (14) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 14.386464ms)
Jan  8 13:54:04.480: INFO: (14) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 14.341201ms)
Jan  8 13:54:04.481: INFO: (14) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 14.998645ms)
Jan  8 13:54:04.481: INFO: (14) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test<... (200; 12.50374ms)
Jan  8 13:54:04.496: INFO: (15) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 12.577344ms)
Jan  8 13:54:04.496: INFO: (15) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 12.757925ms)
Jan  8 13:54:04.496: INFO: (15) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 12.576945ms)
Jan  8 13:54:04.496: INFO: (15) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 12.567341ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 13.495073ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 13.751093ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 13.914153ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 13.810228ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 13.872846ms)
Jan  8 13:54:04.497: INFO: (15) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 14.260823ms)
Jan  8 13:54:04.505: INFO: (16) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 7.854461ms)
Jan  8 13:54:04.505: INFO: (16) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 7.647192ms)
Jan  8 13:54:04.507: INFO: (16) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 9.526944ms)
Jan  8 13:54:04.507: INFO: (16) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 11.517862ms)
Jan  8 13:54:04.510: INFO: (16) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 11.809585ms)
Jan  8 13:54:04.510: INFO: (16) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 11.941352ms)
Jan  8 13:54:04.510: INFO: (16) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 11.868461ms)
Jan  8 13:54:04.517: INFO: (17) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 7.058303ms)
Jan  8 13:54:04.517: INFO: (17) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 7.596479ms)
Jan  8 13:54:04.526: INFO: (17) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 15.545962ms)
Jan  8 13:54:04.526: INFO: (17) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 15.504615ms)
Jan  8 13:54:04.526: INFO: (17) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 16.042842ms)
Jan  8 13:54:04.526: INFO: (17) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 16.15768ms)
Jan  8 13:54:04.527: INFO: (17) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: test (200; 12.568465ms)
Jan  8 13:54:04.553: INFO: (18) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 24.168053ms)
Jan  8 13:54:04.553: INFO: (18) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 24.654266ms)
Jan  8 13:54:04.553: INFO: (18) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 24.677528ms)
Jan  8 13:54:04.553: INFO: (18) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 24.827892ms)
Jan  8 13:54:04.553: INFO: (18) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 24.756ms)
Jan  8 13:54:04.554: INFO: (18) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 25.02989ms)
Jan  8 13:54:04.554: INFO: (18) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 24.955185ms)
Jan  8 13:54:04.556: INFO: (18) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 27.519695ms)
Jan  8 13:54:04.556: INFO: (18) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 27.503256ms)
Jan  8 13:54:04.556: INFO: (18) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 27.529877ms)
Jan  8 13:54:04.556: INFO: (18) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 27.528627ms)
Jan  8 13:54:04.557: INFO: (18) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 27.945113ms)
Jan  8 13:54:04.571: INFO: (19) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:162/proxy/: bar (200; 13.27718ms)
Jan  8 13:54:04.571: INFO: (19) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:1080/proxy/: test<... (200; 13.289264ms)
Jan  8 13:54:04.571: INFO: (19) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname2/proxy/: tls qux (200; 14.48034ms)
Jan  8 13:54:04.572: INFO: (19) /api/v1/namespaces/proxy-2726/services/https:proxy-service-ght6m:tlsportname1/proxy/: tls baz (200; 14.260015ms)
Jan  8 13:54:04.572: INFO: (19) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname2/proxy/: bar (200; 14.80847ms)
Jan  8 13:54:04.573: INFO: (19) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:1080/proxy/: ... (200; 15.665661ms)
Jan  8 13:54:04.573: INFO: (19) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:462/proxy/: tls qux (200; 16.030504ms)
Jan  8 13:54:04.573: INFO: (19) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:162/proxy/: bar (200; 16.237499ms)
Jan  8 13:54:04.574: INFO: (19) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:460/proxy/: tls baz (200; 16.619179ms)
Jan  8 13:54:04.574: INFO: (19) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname2/proxy/: bar (200; 16.726178ms)
Jan  8 13:54:04.574: INFO: (19) /api/v1/namespaces/proxy-2726/services/http:proxy-service-ght6m:portname1/proxy/: foo (200; 16.632488ms)
Jan  8 13:54:04.575: INFO: (19) /api/v1/namespaces/proxy-2726/services/proxy-service-ght6m:portname1/proxy/: foo (200; 17.578451ms)
Jan  8 13:54:04.575: INFO: (19) /api/v1/namespaces/proxy-2726/pods/http:proxy-service-ght6m-6trpp:160/proxy/: foo (200; 17.588564ms)
Jan  8 13:54:04.575: INFO: (19) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp/proxy/: test (200; 17.983706ms)
Jan  8 13:54:04.575: INFO: (19) /api/v1/namespaces/proxy-2726/pods/proxy-service-ght6m-6trpp:160/proxy/: foo (200; 18.239107ms)
Jan  8 13:54:04.575: INFO: (19) /api/v1/namespaces/proxy-2726/pods/https:proxy-service-ght6m-6trpp:443/proxy/: ...
[proxy test teardown and SLOW TEST summary missing from source log]
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:54:22.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 13:54:30.965: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:54:31.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-820" for this suite.
Jan  8 13:54:39.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:54:39.198: INFO: namespace container-runtime-820 deletion completed in 8.167022263s

• [SLOW TEST:16.430 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
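The policy under test substitutes the tail of the container log for the termination message when the container fails without writing to terminationMessagePath. A sketch of an equivalent pod (hypothetical names; the suite generates its own):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-message-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      # Fail after logging; nothing is written to /dev/termination-log,
      # so the log output ("DONE") becomes the termination message.
      command: ["sh", "-c", "echo DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF

  # Read the message back once the container has terminated:
  kubectl get pod termination-message-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
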
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:54:39.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  8 13:54:39.335: INFO: Waiting up to 5m0s for pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e" in namespace "emptydir-726" to be "success or failure"
Jan  8 13:54:39.356: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.277169ms
Jan  8 13:54:41.362: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026846612s
Jan  8 13:54:43.372: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036694318s
Jan  8 13:54:45.380: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045389186s
Jan  8 13:54:47.388: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052670767s
STEP: Saw pod success
Jan  8 13:54:47.388: INFO: Pod "pod-44d27aa6-f349-4c17-a513-76246f66c54e" satisfied condition "success or failure"
Jan  8 13:54:47.392: INFO: Trying to get logs from node iruya-node pod pod-44d27aa6-f349-4c17-a513-76246f66c54e container test-container: 
STEP: delete the pod
Jan  8 13:54:47.490: INFO: Waiting for pod pod-44d27aa6-f349-4c17-a513-76246f66c54e to disappear
Jan  8 13:54:47.503: INFO: Pod pod-44d27aa6-f349-4c17-a513-76246f66c54e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:54:47.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-726" for this suite.
Jan  8 13:54:53.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:54:53.860: INFO: namespace emptydir-726 deletion completed in 6.297628316s

• [SLOW TEST:14.661 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
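The "correct mode" assertion concerns the permissions of the emptyDir mount point itself; with medium Memory the volume is backed by tmpfs. A minimal pod to observe both properties (hypothetical pod name):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      # Print the filesystem type and the mode of the mount point.
      command: ["sh", "-c", "mount | grep test-volume; ls -ld /test-volume"]
      volumeMounts:
      - name: scratch
        mountPath: /test-volume
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory
  EOF

On a conforming node the mount reports type tmpfs and the directory is world-writable (drwxrwxrwx), which is the mode this conformance check expects.
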
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:54:53.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-92f933d4-c381-4596-9f4a-59f563843438
STEP: Creating a pod to test consume secrets
Jan  8 13:54:54.014: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839" in namespace "projected-9999" to be "success or failure"
Jan  8 13:54:54.035: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839": Phase="Pending", Reason="", readiness=false. Elapsed: 20.973533ms
Jan  8 13:54:56.044: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029967963s
Jan  8 13:54:58.057: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042918814s
Jan  8 13:55:00.062: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048232655s
Jan  8 13:55:02.071: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057352881s
STEP: Saw pod success
Jan  8 13:55:02.071: INFO: Pod "pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839" satisfied condition "success or failure"
Jan  8 13:55:02.074: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839 container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 13:55:02.124: INFO: Waiting for pod pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839 to disappear
Jan  8 13:55:02.134: INFO: Pod pod-projected-secrets-08580545-7460-47da-81b3-b73d72237839 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:55:02.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9999" for this suite.
Jan  8 13:55:09.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:55:09.162: INFO: namespace projected-9999 deletion completed in 7.009689617s

• [SLOW TEST:15.301 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
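A projected volume merges several sources (secrets, configMaps, downwardAPI, service account tokens) into a single mount; this test exercises only a secret source. A minimal sketch with hypothetical names:

  kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      # Read the secret key back from the projected mount.
      command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
      volumeMounts:
      - name: projected-secret
        mountPath: /etc/projected-secret
        readOnly: true
    volumes:
    - name: projected-secret
      projected:
        sources:
        - secret:
            name: projected-demo-secret
  EOF
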
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:55:09.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  8 13:55:09.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8511'
Jan  8 13:55:09.769: INFO: stderr: ""
Jan  8 13:55:09.769: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  8 13:55:10.779: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:10.779: INFO: Found 0 / 1
Jan  8 13:55:11.780: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:11.780: INFO: Found 0 / 1
Jan  8 13:55:12.782: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:12.782: INFO: Found 0 / 1
Jan  8 13:55:13.828: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:13.828: INFO: Found 0 / 1
Jan  8 13:55:14.783: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:14.783: INFO: Found 0 / 1
Jan  8 13:55:15.778: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:15.778: INFO: Found 0 / 1
Jan  8 13:55:16.784: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:16.785: INFO: Found 0 / 1
Jan  8 13:55:17.781: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:17.781: INFO: Found 0 / 1
Jan  8 13:55:18.788: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:18.789: INFO: Found 1 / 1
Jan  8 13:55:18.789: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  8 13:55:18.796: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:18.796: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  8 13:55:18.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-gzjdh --namespace=kubectl-8511 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  8 13:55:18.993: INFO: stderr: ""
Jan  8 13:55:18.993: INFO: stdout: "pod/redis-master-gzjdh patched\n"
STEP: checking annotations
Jan  8 13:55:18.999: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:55:18.999: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:55:18.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8511" for this suite.
Jan  8 13:55:41.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:55:41.219: INFO: namespace kubectl-8511 deletion completed in 22.215747067s

• [SLOW TEST:32.056 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
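The patch issued above is a strategic-merge patch; for annotations the map entries are merged, so any existing annotations survive. The same operation by hand, reusing the names from this run:

  # Add (or overwrite) a single annotation on the pod.
  kubectl patch pod redis-master-gzjdh --namespace=kubectl-8511 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'

  # Confirm the annotation landed:
  kubectl get pod redis-master-gzjdh --namespace=kubectl-8511 \
    -o jsonpath='{.metadata.annotations.x}'
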
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:55:41.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:56:31.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4553" for this suite.
Jan  8 13:56:37.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:56:37.663: INFO: namespace container-runtime-4553 deletion completed in 6.199601125s

• [SLOW TEST:56.443 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
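The container-name suffixes track the restart policy each case runs with: rpa, rpof and rpn appear to stand for RestartPolicy Always, OnFailure and Never, and the expected Phase and RestartCount differ accordingly. A sketch of the Never case, where a failing container should leave the pod in phase Failed with a restart count of 0 (hypothetical pod name):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: terminate-cmd-rpn-demo
  spec:
    restartPolicy: Never
    containers:
    - name: terminate-cmd-rpn
      image: busybox
      command: ["sh", "-c", "exit 1"]
  EOF

  # Phase should settle on Failed and the restart count stay at 0:
  kubectl get pod terminate-cmd-rpn-demo \
    -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'
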
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:56:37.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2733
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 13:56:37.744: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 13:57:18.027: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2733 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 13:57:18.027: INFO: >>> kubeConfig: /root/.kube/config
I0108 13:57:18.108020       8 log.go:172] (0xc0031dc210) (0xc00191a5a0) Create stream
I0108 13:57:18.108099       8 log.go:172] (0xc0031dc210) (0xc00191a5a0) Stream added, broadcasting: 1
I0108 13:57:18.117160       8 log.go:172] (0xc0031dc210) Reply frame received for 1
I0108 13:57:18.117202       8 log.go:172] (0xc0031dc210) (0xc001d0a0a0) Create stream
I0108 13:57:18.117215       8 log.go:172] (0xc0031dc210) (0xc001d0a0a0) Stream added, broadcasting: 3
I0108 13:57:18.119218       8 log.go:172] (0xc0031dc210) Reply frame received for 3
I0108 13:57:18.119247       8 log.go:172] (0xc0031dc210) (0xc00191a640) Create stream
I0108 13:57:18.119264       8 log.go:172] (0xc0031dc210) (0xc00191a640) Stream added, broadcasting: 5
I0108 13:57:18.120490       8 log.go:172] (0xc0031dc210) Reply frame received for 5
I0108 13:57:19.302598       8 log.go:172] (0xc0031dc210) Data frame received for 3
I0108 13:57:19.302710       8 log.go:172] (0xc001d0a0a0) (3) Data frame handling
I0108 13:57:19.302741       8 log.go:172] (0xc001d0a0a0) (3) Data frame sent
I0108 13:57:19.511909       8 log.go:172] (0xc0031dc210) Data frame received for 1
I0108 13:57:19.511998       8 log.go:172] (0xc00191a5a0) (1) Data frame handling
I0108 13:57:19.512037       8 log.go:172] (0xc00191a5a0) (1) Data frame sent
I0108 13:57:19.512083       8 log.go:172] (0xc0031dc210) (0xc00191a5a0) Stream removed, broadcasting: 1
I0108 13:57:19.512541       8 log.go:172] (0xc0031dc210) (0xc001d0a0a0) Stream removed, broadcasting: 3
I0108 13:57:19.512601       8 log.go:172] (0xc0031dc210) (0xc00191a640) Stream removed, broadcasting: 5
I0108 13:57:19.512623       8 log.go:172] (0xc0031dc210) Go away received
I0108 13:57:19.512656       8 log.go:172] (0xc0031dc210) (0xc00191a5a0) Stream removed, broadcasting: 1
I0108 13:57:19.512675       8 log.go:172] (0xc0031dc210) (0xc001d0a0a0) Stream removed, broadcasting: 3
I0108 13:57:19.512687       8 log.go:172] (0xc0031dc210) (0xc00191a640) Stream removed, broadcasting: 5
Jan  8 13:57:19.512: INFO: Found all expected endpoints: [netserver-0]
Jan  8 13:57:19.520: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2733 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 13:57:19.520: INFO: >>> kubeConfig: /root/.kube/config
I0108 13:57:19.648418       8 log.go:172] (0xc0013ca420) (0xc001d0a460) Create stream
I0108 13:57:19.648566       8 log.go:172] (0xc0013ca420) (0xc001d0a460) Stream added, broadcasting: 1
I0108 13:57:19.658956       8 log.go:172] (0xc0013ca420) Reply frame received for 1
I0108 13:57:19.659039       8 log.go:172] (0xc0013ca420) (0xc00191a780) Create stream
I0108 13:57:19.659052       8 log.go:172] (0xc0013ca420) (0xc00191a780) Stream added, broadcasting: 3
I0108 13:57:19.660841       8 log.go:172] (0xc0013ca420) Reply frame received for 3
I0108 13:57:19.660883       8 log.go:172] (0xc0013ca420) (0xc000a9edc0) Create stream
I0108 13:57:19.660893       8 log.go:172] (0xc0013ca420) (0xc000a9edc0) Stream added, broadcasting: 5
I0108 13:57:19.662526       8 log.go:172] (0xc0013ca420) Reply frame received for 5
I0108 13:57:20.792309       8 log.go:172] (0xc0013ca420) Data frame received for 3
I0108 13:57:20.792362       8 log.go:172] (0xc00191a780) (3) Data frame handling
I0108 13:57:20.792385       8 log.go:172] (0xc00191a780) (3) Data frame sent
I0108 13:57:20.982672       8 log.go:172] (0xc0013ca420) Data frame received for 1
I0108 13:57:20.983065       8 log.go:172] (0xc001d0a460) (1) Data frame handling
I0108 13:57:20.983104       8 log.go:172] (0xc001d0a460) (1) Data frame sent
I0108 13:57:20.983120       8 log.go:172] (0xc0013ca420) (0xc001d0a460) Stream removed, broadcasting: 1
I0108 13:57:20.983449       8 log.go:172] (0xc0013ca420) (0xc00191a780) Stream removed, broadcasting: 3
I0108 13:57:20.983756       8 log.go:172] (0xc0013ca420) (0xc000a9edc0) Stream removed, broadcasting: 5
I0108 13:57:20.983841       8 log.go:172] (0xc0013ca420) (0xc001d0a460) Stream removed, broadcasting: 1
I0108 13:57:20.983858       8 log.go:172] (0xc0013ca420) (0xc00191a780) Stream removed, broadcasting: 3
I0108 13:57:20.983868       8 log.go:172] (0xc0013ca420) (0xc000a9edc0) Stream removed, broadcasting: 5
I0108 13:57:20.984237       8 log.go:172] (0xc0013ca420) Go away received
Jan  8 13:57:20.984: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:57:20.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2733" for this suite.
Jan  8 13:57:45.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:57:45.169: INFO: namespace pod-network-test-2733 deletion completed in 24.170992595s

• [SLOW TEST:67.505 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
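Note on the UDP check above: it reduces to piping a payload into busybox nc from a helper pod and grepping for a non-empty reply. A minimal sketch of the equivalent manual probe, assuming the hostexec helper pod from this run and substituting a placeholder endpoint IP for the hard-coded ones in the log:

  kubectl exec -n pod-network-test-2733 host-test-container-pod -c hostexec -- \
    /bin/sh -c "echo hostName | nc -w 1 -u <endpoint-ip> 8081 | grep -v '^\s*$'"

A non-empty response is the pass signal; the framework counts one such reply per expected netserver endpoint (netserver-0 and netserver-1 above).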
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:57:45.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 13:57:45.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-648'
Jan  8 13:57:45.755: INFO: stderr: ""
Jan  8 13:57:45.756: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan  8 13:57:45.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-648'
Jan  8 13:57:46.622: INFO: stderr: ""
Jan  8 13:57:46.622: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  8 13:57:47.640: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:47.640: INFO: Found 0 / 1
Jan  8 13:57:48.658: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:48.658: INFO: Found 0 / 1
Jan  8 13:57:49.631: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:49.632: INFO: Found 0 / 1
Jan  8 13:57:50.640: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:50.641: INFO: Found 0 / 1
Jan  8 13:57:51.643: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:51.643: INFO: Found 0 / 1
Jan  8 13:57:52.656: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:52.657: INFO: Found 1 / 1
Jan  8 13:57:52.657: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  8 13:57:52.668: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 13:57:52.668: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  8 13:57:52.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-dgsfl --namespace=kubectl-648'
Jan  8 13:57:52.868: INFO: stderr: ""
Jan  8 13:57:52.868: INFO: stdout: "Name:           redis-master-dgsfl\nNamespace:      kubectl-648\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Wed, 08 Jan 2020 13:57:45 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://303925805d856b53c3e8517afcc55eabbe97e4db39315d2306402c9001194d36\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 08 Jan 2020 13:57:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6lxf8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-6lxf8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-6lxf8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  7s    default-scheduler    Successfully assigned kubectl-648/redis-master-dgsfl to iruya-node\n  Normal  Pulled     3s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    0s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    0s    kubelet, iruya-node  Started container redis-master\n"
Jan  8 13:57:52.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-648'
Jan  8 13:57:53.083: INFO: stderr: ""
Jan  8 13:57:53.083: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-648\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-dgsfl\n"
Jan  8 13:57:53.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-648'
Jan  8 13:57:53.183: INFO: stderr: ""
Jan  8 13:57:53.183: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-648\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.97.160.222\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan  8 13:57:53.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan  8 13:57:53.309: INFO: stderr: ""
Jan  8 13:57:53.309: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 08 Jan 2020 13:57:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 08 Jan 2020 13:57:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 08 Jan 2020 13:57:33 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 08 Jan 2020 13:57:33 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         157d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         88d\n  kubectl-648                redis-master-dgsfl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  
ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan  8 13:57:53.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-648'
Jan  8 13:57:53.488: INFO: stderr: ""
Jan  8 13:57:53.488: INFO: stdout: "Name:         kubectl-648\nLabels:       e2e-framework=kubectl\n              e2e-run=b060eb11-87a0-4c96-b55d-d72366d4fc98\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 13:57:53.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-648" for this suite.
Jan  8 13:58:15.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 13:58:15.641: INFO: namespace kubectl-648 deletion completed in 22.148272182s

• [SLOW TEST:30.470 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
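Note on the describe test above: it exercises kubectl describe against every object kind it created, plus the hosting node and the namespace. A minimal sketch of the same sequence, reusing the run-specific names from this log:

  kubectl describe pod redis-master-dgsfl -n kubectl-648
  kubectl describe rc redis-master -n kubectl-648
  kubectl describe service redis-master -n kubectl-648
  kubectl describe node iruya-node
  kubectl describe namespace kubectl-648

The pass criterion is essentially that identifying fields (pod name, image, labels, endpoints, and so on) appear in the output, not that the layout matches exactly.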
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 13:58:15.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2447
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2447
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2447
Jan  8 13:58:15.789: INFO: Found 0 stateful pods, waiting for 1
Jan  8 13:58:25.809: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  8 13:58:25.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 13:58:26.472: INFO: stderr: "I0108 13:58:26.054186    1050 log.go:172] (0xc0008ec420) (0xc0009066e0) Create stream\nI0108 13:58:26.054403    1050 log.go:172] (0xc0008ec420) (0xc0009066e0) Stream added, broadcasting: 1\nI0108 13:58:26.063565    1050 log.go:172] (0xc0008ec420) Reply frame received for 1\nI0108 13:58:26.063615    1050 log.go:172] (0xc0008ec420) (0xc0006481e0) Create stream\nI0108 13:58:26.063624    1050 log.go:172] (0xc0008ec420) (0xc0006481e0) Stream added, broadcasting: 3\nI0108 13:58:26.065244    1050 log.go:172] (0xc0008ec420) Reply frame received for 3\nI0108 13:58:26.065300    1050 log.go:172] (0xc0008ec420) (0xc000906780) Create stream\nI0108 13:58:26.065328    1050 log.go:172] (0xc0008ec420) (0xc000906780) Stream added, broadcasting: 5\nI0108 13:58:26.067333    1050 log.go:172] (0xc0008ec420) Reply frame received for 5\nI0108 13:58:26.227786    1050 log.go:172] (0xc0008ec420) Data frame received for 5\nI0108 13:58:26.227860    1050 log.go:172] (0xc000906780) (5) Data frame handling\nI0108 13:58:26.227891    1050 log.go:172] (0xc000906780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 13:58:26.285467    1050 log.go:172] (0xc0008ec420) Data frame received for 3\nI0108 13:58:26.285568    1050 log.go:172] (0xc0006481e0) (3) Data frame handling\nI0108 13:58:26.285612    1050 log.go:172] (0xc0006481e0) (3) Data frame sent\nI0108 13:58:26.456235    1050 log.go:172] (0xc0008ec420) Data frame received for 1\nI0108 13:58:26.456488    1050 log.go:172] (0xc0008ec420) (0xc000906780) Stream removed, broadcasting: 5\nI0108 13:58:26.456969    1050 log.go:172] (0xc0009066e0) (1) Data frame handling\nI0108 13:58:26.457017    1050 log.go:172] (0xc0009066e0) (1) Data frame sent\nI0108 13:58:26.457134    1050 log.go:172] (0xc0008ec420) (0xc0006481e0) Stream removed, broadcasting: 3\nI0108 13:58:26.457227    1050 log.go:172] (0xc0008ec420) (0xc0009066e0) Stream removed, broadcasting: 1\nI0108 13:58:26.457267    1050 log.go:172] (0xc0008ec420) Go away received\nI0108 13:58:26.458951    1050 log.go:172] (0xc0008ec420) (0xc0009066e0) Stream removed, broadcasting: 1\nI0108 13:58:26.458970    1050 log.go:172] (0xc0008ec420) (0xc0006481e0) Stream removed, broadcasting: 3\nI0108 13:58:26.458978    1050 log.go:172] (0xc0008ec420) (0xc000906780) Stream removed, broadcasting: 5\n"
Jan  8 13:58:26.473: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 13:58:26.473: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 13:58:26.492: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  8 13:58:36.516: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
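Note on the mv above: it is the test's lever for making ss-0 unready without killing it. The stateful pods run nginx with an HTTP readiness probe against the webroot, so moving index.html aside flips Ready to false (visible just above) while the container keeps running, which is what should freeze further scaling. A sketch of breaking and restoring readiness by hand under that assumption:

  kubectl exec -n statefulset-2447 ss-0 -- mv /usr/share/nginx/html/index.html /tmp/    # probe starts failing
  kubectl exec -n statefulset-2447 ss-0 -- mv /tmp/index.html /usr/share/nginx/html/    # probe recovers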
Jan  8 13:58:36.516: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 13:58:36.548: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999384s
Jan  8 13:58:37.566: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984636947s
Jan  8 13:58:38.586: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966176319s
Jan  8 13:58:39.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.947132867s
Jan  8 13:58:40.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.934472229s
Jan  8 13:58:41.619: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.926174656s
Jan  8 13:58:42.628: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.91399977s
Jan  8 13:58:43.639: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.904119633s
Jan  8 13:58:44.664: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.893317861s
Jan  8 13:58:45.673: INFO: Verifying statefulset ss doesn't scale past 1 for another 868.839835ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2447
Jan  8 13:58:46.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:58:47.224: INFO: stderr: "I0108 13:58:46.877359    1068 log.go:172] (0xc000b0e370) (0xc000a1a6e0) Create stream\nI0108 13:58:46.877559    1068 log.go:172] (0xc000b0e370) (0xc000a1a6e0) Stream added, broadcasting: 1\nI0108 13:58:46.884065    1068 log.go:172] (0xc000b0e370) Reply frame received for 1\nI0108 13:58:46.884177    1068 log.go:172] (0xc000b0e370) (0xc00066c280) Create stream\nI0108 13:58:46.884191    1068 log.go:172] (0xc000b0e370) (0xc00066c280) Stream added, broadcasting: 3\nI0108 13:58:46.886993    1068 log.go:172] (0xc000b0e370) Reply frame received for 3\nI0108 13:58:46.887036    1068 log.go:172] (0xc000b0e370) (0xc000a1a780) Create stream\nI0108 13:58:46.887051    1068 log.go:172] (0xc000b0e370) (0xc000a1a780) Stream added, broadcasting: 5\nI0108 13:58:46.889072    1068 log.go:172] (0xc000b0e370) Reply frame received for 5\nI0108 13:58:47.003890    1068 log.go:172] (0xc000b0e370) Data frame received for 3\nI0108 13:58:47.003989    1068 log.go:172] (0xc00066c280) (3) Data frame handling\nI0108 13:58:47.004015    1068 log.go:172] (0xc00066c280) (3) Data frame sent\nI0108 13:58:47.004057    1068 log.go:172] (0xc000b0e370) Data frame received for 5\nI0108 13:58:47.004067    1068 log.go:172] (0xc000a1a780) (5) Data frame handling\nI0108 13:58:47.004084    1068 log.go:172] (0xc000a1a780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 13:58:47.210759    1068 log.go:172] (0xc000b0e370) (0xc00066c280) Stream removed, broadcasting: 3\nI0108 13:58:47.211130    1068 log.go:172] (0xc000b0e370) Data frame received for 1\nI0108 13:58:47.211189    1068 log.go:172] (0xc000a1a6e0) (1) Data frame handling\nI0108 13:58:47.211540    1068 log.go:172] (0xc000a1a6e0) (1) Data frame sent\nI0108 13:58:47.211802    1068 log.go:172] (0xc000b0e370) (0xc000a1a780) Stream removed, broadcasting: 5\nI0108 13:58:47.211888    1068 log.go:172] (0xc000b0e370) (0xc000a1a6e0) Stream removed, broadcasting: 1\nI0108 13:58:47.211918    1068 log.go:172] (0xc000b0e370) Go away received\nI0108 13:58:47.213418    1068 log.go:172] (0xc000b0e370) (0xc000a1a6e0) Stream removed, broadcasting: 1\nI0108 13:58:47.213449    1068 log.go:172] (0xc000b0e370) (0xc00066c280) Stream removed, broadcasting: 3\nI0108 13:58:47.213459    1068 log.go:172] (0xc000b0e370) (0xc000a1a780) Stream removed, broadcasting: 5\n"
Jan  8 13:58:47.224: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 13:58:47.224: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 13:58:47.252: INFO: Found 1 stateful pods, waiting for 3
Jan  8 13:58:57.369: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:58:57.369: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:58:57.369: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  8 13:59:07.262: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:59:07.262: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 13:59:07.262: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
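"Scaled up in order" means the pods came up strictly as ss-0, then ss-1, then ss-2, each waiting for its predecessor to be Running and Ready (the default OrderedReady pod management policy). One way to eyeball the same ordering by hand, assuming the selector the watcher was initialized with:

  kubectl get pods -n statefulset-2447 -l baz=blah,foo=bar --sort-by=.metadata.creationTimestamp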
STEP: Scale down will halt with unhealthy stateful pod
Jan  8 13:59:07.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 13:59:07.849: INFO: stderr: "I0108 13:59:07.541353    1088 log.go:172] (0xc000ae0420) (0xc000932640) Create stream\nI0108 13:59:07.541535    1088 log.go:172] (0xc000ae0420) (0xc000932640) Stream added, broadcasting: 1\nI0108 13:59:07.549463    1088 log.go:172] (0xc000ae0420) Reply frame received for 1\nI0108 13:59:07.549682    1088 log.go:172] (0xc000ae0420) (0xc00090c000) Create stream\nI0108 13:59:07.549714    1088 log.go:172] (0xc000ae0420) (0xc00090c000) Stream added, broadcasting: 3\nI0108 13:59:07.553106    1088 log.go:172] (0xc000ae0420) Reply frame received for 3\nI0108 13:59:07.553156    1088 log.go:172] (0xc000ae0420) (0xc0005f6320) Create stream\nI0108 13:59:07.553167    1088 log.go:172] (0xc000ae0420) (0xc0005f6320) Stream added, broadcasting: 5\nI0108 13:59:07.555010    1088 log.go:172] (0xc000ae0420) Reply frame received for 5\nI0108 13:59:07.690504    1088 log.go:172] (0xc000ae0420) Data frame received for 5\nI0108 13:59:07.690739    1088 log.go:172] (0xc0005f6320) (5) Data frame handling\nI0108 13:59:07.690777    1088 log.go:172] (0xc0005f6320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 13:59:07.690817    1088 log.go:172] (0xc000ae0420) Data frame received for 3\nI0108 13:59:07.690828    1088 log.go:172] (0xc00090c000) (3) Data frame handling\nI0108 13:59:07.690868    1088 log.go:172] (0xc00090c000) (3) Data frame sent\nI0108 13:59:07.835266    1088 log.go:172] (0xc000ae0420) Data frame received for 1\nI0108 13:59:07.835409    1088 log.go:172] (0xc000ae0420) (0xc0005f6320) Stream removed, broadcasting: 5\nI0108 13:59:07.835494    1088 log.go:172] (0xc000932640) (1) Data frame handling\nI0108 13:59:07.835517    1088 log.go:172] (0xc000932640) (1) Data frame sent\nI0108 13:59:07.835771    1088 log.go:172] (0xc000ae0420) (0xc00090c000) Stream removed, broadcasting: 3\nI0108 13:59:07.835807    1088 log.go:172] (0xc000ae0420) (0xc000932640) Stream removed, broadcasting: 1\nI0108 13:59:07.835826    1088 log.go:172] (0xc000ae0420) Go away received\nI0108 13:59:07.837372    1088 log.go:172] (0xc000ae0420) (0xc000932640) Stream removed, broadcasting: 1\nI0108 13:59:07.837396    1088 log.go:172] (0xc000ae0420) (0xc00090c000) Stream removed, broadcasting: 3\nI0108 13:59:07.837404    1088 log.go:172] (0xc000ae0420) (0xc0005f6320) Stream removed, broadcasting: 5\n"
Jan  8 13:59:07.850: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 13:59:07.850: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 13:59:07.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 13:59:08.504: INFO: stderr: "I0108 13:59:08.092221    1108 log.go:172] (0xc00093a2c0) (0xc00098c6e0) Create stream\nI0108 13:59:08.092641    1108 log.go:172] (0xc00093a2c0) (0xc00098c6e0) Stream added, broadcasting: 1\nI0108 13:59:08.096859    1108 log.go:172] (0xc00093a2c0) Reply frame received for 1\nI0108 13:59:08.097014    1108 log.go:172] (0xc00093a2c0) (0xc0005b2280) Create stream\nI0108 13:59:08.097045    1108 log.go:172] (0xc00093a2c0) (0xc0005b2280) Stream added, broadcasting: 3\nI0108 13:59:08.098171    1108 log.go:172] (0xc00093a2c0) Reply frame received for 3\nI0108 13:59:08.098201    1108 log.go:172] (0xc00093a2c0) (0xc00098c780) Create stream\nI0108 13:59:08.098209    1108 log.go:172] (0xc00093a2c0) (0xc00098c780) Stream added, broadcasting: 5\nI0108 13:59:08.099633    1108 log.go:172] (0xc00093a2c0) Reply frame received for 5\nI0108 13:59:08.280754    1108 log.go:172] (0xc00093a2c0) Data frame received for 5\nI0108 13:59:08.280832    1108 log.go:172] (0xc00098c780) (5) Data frame handling\nI0108 13:59:08.280859    1108 log.go:172] (0xc00098c780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 13:59:08.381216    1108 log.go:172] (0xc00093a2c0) Data frame received for 3\nI0108 13:59:08.381374    1108 log.go:172] (0xc0005b2280) (3) Data frame handling\nI0108 13:59:08.381432    1108 log.go:172] (0xc0005b2280) (3) Data frame sent\nI0108 13:59:08.492140    1108 log.go:172] (0xc00093a2c0) (0xc0005b2280) Stream removed, broadcasting: 3\nI0108 13:59:08.492556    1108 log.go:172] (0xc00093a2c0) Data frame received for 1\nI0108 13:59:08.492661    1108 log.go:172] (0xc00098c6e0) (1) Data frame handling\nI0108 13:59:08.492729    1108 log.go:172] (0xc00098c6e0) (1) Data frame sent\nI0108 13:59:08.492766    1108 log.go:172] (0xc00093a2c0) (0xc00098c6e0) Stream removed, broadcasting: 1\nI0108 13:59:08.493512    1108 log.go:172] (0xc00093a2c0) (0xc00098c780) Stream removed, broadcasting: 5\nI0108 13:59:08.493775    1108 log.go:172] (0xc00093a2c0) Go away received\nI0108 13:59:08.495411    1108 log.go:172] (0xc00093a2c0) (0xc00098c6e0) Stream removed, broadcasting: 1\nI0108 13:59:08.495460    1108 log.go:172] (0xc00093a2c0) (0xc0005b2280) Stream removed, broadcasting: 3\nI0108 13:59:08.495471    1108 log.go:172] (0xc00093a2c0) (0xc00098c780) Stream removed, broadcasting: 5\n"
Jan  8 13:59:08.505: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 13:59:08.505: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 13:59:08.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 13:59:09.042: INFO: stderr: "I0108 13:59:08.735145    1129 log.go:172] (0xc00012a0b0) (0xc0008645a0) Create stream\nI0108 13:59:08.735324    1129 log.go:172] (0xc00012a0b0) (0xc0008645a0) Stream added, broadcasting: 1\nI0108 13:59:08.741316    1129 log.go:172] (0xc00012a0b0) Reply frame received for 1\nI0108 13:59:08.741368    1129 log.go:172] (0xc00012a0b0) (0xc000836000) Create stream\nI0108 13:59:08.741380    1129 log.go:172] (0xc00012a0b0) (0xc000836000) Stream added, broadcasting: 3\nI0108 13:59:08.742942    1129 log.go:172] (0xc00012a0b0) Reply frame received for 3\nI0108 13:59:08.742967    1129 log.go:172] (0xc00012a0b0) (0xc000632280) Create stream\nI0108 13:59:08.742976    1129 log.go:172] (0xc00012a0b0) (0xc000632280) Stream added, broadcasting: 5\nI0108 13:59:08.744112    1129 log.go:172] (0xc00012a0b0) Reply frame received for 5\nI0108 13:59:08.855359    1129 log.go:172] (0xc00012a0b0) Data frame received for 5\nI0108 13:59:08.855499    1129 log.go:172] (0xc000632280) (5) Data frame handling\nI0108 13:59:08.855527    1129 log.go:172] (0xc000632280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 13:59:08.886681    1129 log.go:172] (0xc00012a0b0) Data frame received for 3\nI0108 13:59:08.886778    1129 log.go:172] (0xc000836000) (3) Data frame handling\nI0108 13:59:08.886804    1129 log.go:172] (0xc000836000) (3) Data frame sent\nI0108 13:59:09.033255    1129 log.go:172] (0xc00012a0b0) (0xc000632280) Stream removed, broadcasting: 5\nI0108 13:59:09.033441    1129 log.go:172] (0xc00012a0b0) (0xc000836000) Stream removed, broadcasting: 3\nI0108 13:59:09.033489    1129 log.go:172] (0xc00012a0b0) Data frame received for 1\nI0108 13:59:09.033518    1129 log.go:172] (0xc0008645a0) (1) Data frame handling\nI0108 13:59:09.033536    1129 log.go:172] (0xc0008645a0) (1) Data frame sent\nI0108 13:59:09.033547    1129 log.go:172] (0xc00012a0b0) (0xc0008645a0) Stream removed, broadcasting: 1\nI0108 13:59:09.033566    1129 log.go:172] (0xc00012a0b0) Go away received\nI0108 13:59:09.034786    1129 log.go:172] (0xc00012a0b0) (0xc0008645a0) Stream removed, broadcasting: 1\nI0108 13:59:09.034803    1129 log.go:172] (0xc00012a0b0) (0xc000836000) Stream removed, broadcasting: 3\nI0108 13:59:09.034807    1129 log.go:172] (0xc00012a0b0) (0xc000632280) Stream removed, broadcasting: 5\n"
Jan  8 13:59:09.043: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 13:59:09.043: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 13:59:09.043: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 13:59:09.049: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  8 13:59:19.083: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 13:59:19.083: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 13:59:19.083: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 13:59:19.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999931s
Jan  8 13:59:20.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99047213s
Jan  8 13:59:21.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981504965s
Jan  8 13:59:22.169: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970686821s
Jan  8 13:59:23.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.942047044s
Jan  8 13:59:24.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.933522886s
Jan  8 13:59:25.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.188078445s
Jan  8 13:59:26.948: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.174430844s
Jan  8 13:59:27.956: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.163690296s
Jan  8 13:59:28.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 155.431817ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2447
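DUMMY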
Jan  8 13:59:29.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:59:30.638: INFO: stderr: "I0108 13:59:30.266223    1144 log.go:172] (0xc000a96370) (0xc00056a820) Create stream\nI0108 13:59:30.267056    1144 log.go:172] (0xc000a96370) (0xc00056a820) Stream added, broadcasting: 1\nI0108 13:59:30.274727    1144 log.go:172] (0xc000a96370) Reply frame received for 1\nI0108 13:59:30.274794    1144 log.go:172] (0xc000a96370) (0xc00056a8c0) Create stream\nI0108 13:59:30.274804    1144 log.go:172] (0xc000a96370) (0xc00056a8c0) Stream added, broadcasting: 3\nI0108 13:59:30.277063    1144 log.go:172] (0xc000a96370) Reply frame received for 3\nI0108 13:59:30.277203    1144 log.go:172] (0xc000a96370) (0xc00056a960) Create stream\nI0108 13:59:30.277216    1144 log.go:172] (0xc000a96370) (0xc00056a960) Stream added, broadcasting: 5\nI0108 13:59:30.279069    1144 log.go:172] (0xc000a96370) Reply frame received for 5\nI0108 13:59:30.392086    1144 log.go:172] (0xc000a96370) Data frame received for 5\nI0108 13:59:30.392249    1144 log.go:172] (0xc00056a960) (5) Data frame handling\nI0108 13:59:30.392273    1144 log.go:172] (0xc00056a960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 13:59:30.392369    1144 log.go:172] (0xc000a96370) Data frame received for 3\nI0108 13:59:30.392437    1144 log.go:172] (0xc00056a8c0) (3) Data frame handling\nI0108 13:59:30.392467    1144 log.go:172] (0xc00056a8c0) (3) Data frame sent\nI0108 13:59:30.616314    1144 log.go:172] (0xc000a96370) (0xc00056a8c0) Stream removed, broadcasting: 3\nI0108 13:59:30.616732    1144 log.go:172] (0xc000a96370) (0xc00056a960) Stream removed, broadcasting: 5\nI0108 13:59:30.617339    1144 log.go:172] (0xc000a96370) Data frame received for 1\nI0108 13:59:30.617699    1144 log.go:172] (0xc00056a820) (1) Data frame handling\nI0108 13:59:30.617796    1144 log.go:172] (0xc00056a820) (1) Data frame sent\nI0108 13:59:30.617890    1144 log.go:172] (0xc000a96370) (0xc00056a820) Stream removed, broadcasting: 1\nI0108 13:59:30.618027    1144 log.go:172] (0xc000a96370) Go away received\nI0108 13:59:30.619705    1144 log.go:172] (0xc000a96370) (0xc00056a820) Stream removed, broadcasting: 1\nI0108 13:59:30.619731    1144 log.go:172] (0xc000a96370) (0xc00056a8c0) Stream removed, broadcasting: 3\nI0108 13:59:30.619737    1144 log.go:172] (0xc000a96370) (0xc00056a960) Stream removed, broadcasting: 5\n"
Jan  8 13:59:30.638: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 13:59:30.638: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 13:59:30.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:59:30.992: INFO: stderr: "I0108 13:59:30.812923    1164 log.go:172] (0xc000a52160) (0xc00086e000) Create stream\nI0108 13:59:30.813128    1164 log.go:172] (0xc000a52160) (0xc00086e000) Stream added, broadcasting: 1\nI0108 13:59:30.816584    1164 log.go:172] (0xc000a52160) Reply frame received for 1\nI0108 13:59:30.816668    1164 log.go:172] (0xc000a52160) (0xc000338000) Create stream\nI0108 13:59:30.816683    1164 log.go:172] (0xc000a52160) (0xc000338000) Stream added, broadcasting: 3\nI0108 13:59:30.818046    1164 log.go:172] (0xc000a52160) Reply frame received for 3\nI0108 13:59:30.818068    1164 log.go:172] (0xc000a52160) (0xc0003380a0) Create stream\nI0108 13:59:30.818077    1164 log.go:172] (0xc000a52160) (0xc0003380a0) Stream added, broadcasting: 5\nI0108 13:59:30.819294    1164 log.go:172] (0xc000a52160) Reply frame received for 5\nI0108 13:59:30.903744    1164 log.go:172] (0xc000a52160) Data frame received for 5\nI0108 13:59:30.903846    1164 log.go:172] (0xc0003380a0) (5) Data frame handling\nI0108 13:59:30.903869    1164 log.go:172] (0xc0003380a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 13:59:30.903916    1164 log.go:172] (0xc000a52160) Data frame received for 3\nI0108 13:59:30.903949    1164 log.go:172] (0xc000338000) (3) Data frame handling\nI0108 13:59:30.903967    1164 log.go:172] (0xc000338000) (3) Data frame sent\nI0108 13:59:30.980135    1164 log.go:172] (0xc000a52160) Data frame received for 1\nI0108 13:59:30.980359    1164 log.go:172] (0xc00086e000) (1) Data frame handling\nI0108 13:59:30.980404    1164 log.go:172] (0xc00086e000) (1) Data frame sent\nI0108 13:59:30.980460    1164 log.go:172] (0xc000a52160) (0xc00086e000) Stream removed, broadcasting: 1\nI0108 13:59:30.981085    1164 log.go:172] (0xc000a52160) (0xc0003380a0) Stream removed, broadcasting: 5\nI0108 13:59:30.981291    1164 log.go:172] (0xc000a52160) (0xc000338000) Stream removed, broadcasting: 3\nI0108 13:59:30.981825    1164 log.go:172] (0xc000a52160) Go away received\nI0108 13:59:30.982582    1164 log.go:172] (0xc000a52160) (0xc00086e000) Stream removed, broadcasting: 1\nI0108 13:59:30.982600    1164 log.go:172] (0xc000a52160) (0xc000338000) Stream removed, broadcasting: 3\nI0108 13:59:30.982611    1164 log.go:172] (0xc000a52160) (0xc0003380a0) Stream removed, broadcasting: 5\n"
Jan  8 13:59:30.992: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 13:59:30.992: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 13:59:30.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:59:31.521: INFO: rc: 126
Jan  8 13:59:31.521: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown
 I0108 13:59:31.315654    1185 log.go:172] (0xc000116e70) (0xc000890780) Create stream
I0108 13:59:31.316141    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream added, broadcasting: 1
I0108 13:59:31.322212    1185 log.go:172] (0xc000116e70) Reply frame received for 1
I0108 13:59:31.322273    1185 log.go:172] (0xc000116e70) (0xc000692140) Create stream
I0108 13:59:31.322290    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream added, broadcasting: 3
I0108 13:59:31.323725    1185 log.go:172] (0xc000116e70) Reply frame received for 3
I0108 13:59:31.323756    1185 log.go:172] (0xc000116e70) (0xc000800000) Create stream
I0108 13:59:31.323770    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream added, broadcasting: 5
I0108 13:59:31.326096    1185 log.go:172] (0xc000116e70) Reply frame received for 5
I0108 13:59:31.500706    1185 log.go:172] (0xc000116e70) Data frame received for 3
I0108 13:59:31.500886    1185 log.go:172] (0xc000692140) (3) Data frame handling
I0108 13:59:31.500922    1185 log.go:172] (0xc000692140) (3) Data frame sent
I0108 13:59:31.506191    1185 log.go:172] (0xc000116e70) Data frame received for 1
I0108 13:59:31.506219    1185 log.go:172] (0xc000890780) (1) Data frame handling
I0108 13:59:31.506236    1185 log.go:172] (0xc000890780) (1) Data frame sent
I0108 13:59:31.506956    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream removed, broadcasting: 1
I0108 13:59:31.507100    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream removed, broadcasting: 3
I0108 13:59:31.509370    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream removed, broadcasting: 5
I0108 13:59:31.509622    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream removed, broadcasting: 1
I0108 13:59:31.509666    1185 log.go:172] (0xc000116e70) Go away received
I0108 13:59:31.509763    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream removed, broadcasting: 3
I0108 13:59:31.509828    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream removed, broadcasting: 5
command terminated with exit code 126
 []  0xc001e6a2a0 exit status 126   true [0xc000714858 0xc000714a18 0xc000714a98] [0xc000714858 0xc000714a18 0xc000714a98] [0xc0007149f0 0xc000714a78] [0xba6c50 0xba6c50] 0xc002a6c420 }:
Command stdout:
OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown

stderr:
I0108 13:59:31.315654    1185 log.go:172] (0xc000116e70) (0xc000890780) Create stream
I0108 13:59:31.316141    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream added, broadcasting: 1
I0108 13:59:31.322212    1185 log.go:172] (0xc000116e70) Reply frame received for 1
I0108 13:59:31.322273    1185 log.go:172] (0xc000116e70) (0xc000692140) Create stream
I0108 13:59:31.322290    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream added, broadcasting: 3
I0108 13:59:31.323725    1185 log.go:172] (0xc000116e70) Reply frame received for 3
I0108 13:59:31.323756    1185 log.go:172] (0xc000116e70) (0xc000800000) Create stream
I0108 13:59:31.323770    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream added, broadcasting: 5
I0108 13:59:31.326096    1185 log.go:172] (0xc000116e70) Reply frame received for 5
I0108 13:59:31.500706    1185 log.go:172] (0xc000116e70) Data frame received for 3
I0108 13:59:31.500886    1185 log.go:172] (0xc000692140) (3) Data frame handling
I0108 13:59:31.500922    1185 log.go:172] (0xc000692140) (3) Data frame sent
I0108 13:59:31.506191    1185 log.go:172] (0xc000116e70) Data frame received for 1
I0108 13:59:31.506219    1185 log.go:172] (0xc000890780) (1) Data frame handling
I0108 13:59:31.506236    1185 log.go:172] (0xc000890780) (1) Data frame sent
I0108 13:59:31.506956    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream removed, broadcasting: 1
I0108 13:59:31.507100    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream removed, broadcasting: 3
I0108 13:59:31.509370    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream removed, broadcasting: 5
I0108 13:59:31.509622    1185 log.go:172] (0xc000116e70) (0xc000890780) Stream removed, broadcasting: 1
I0108 13:59:31.509666    1185 log.go:172] (0xc000116e70) Go away received
I0108 13:59:31.509763    1185 log.go:172] (0xc000116e70) (0xc000692140) Stream removed, broadcasting: 3
I0108 13:59:31.509828    1185 log.go:172] (0xc000116e70) (0xc000800000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
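Note on this failure and the retries that follow: they are expected rather than flaky. Scale-down proceeds in reverse ordinal order, so ss-2 is deleted first and the exec races its termination, first hitting a stopped container (OCI runtime exit 126), then a missing container, then a missing pod (exit 1), while the framework retries every 10s. A rough shell equivalent of that retry-until-gone loop, assuming the container image ships a true binary:

  while kubectl exec -n statefulset-2447 ss-2 -- true >/dev/null 2>&1; do
    sleep 10   # pod is still execable; keep waiting for termination
  done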
Jan  8 13:59:41.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:59:41.762: INFO: rc: 1
Jan  8 13:59:41.763: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002ab0e40 exit status 1   true [0xc003344098 0xc0033440b0 0xc0033440c8] [0xc003344098 0xc0033440b0 0xc0033440c8] [0xc0033440a8 0xc0033440c0] [0xba6c50 0xba6c50] 0xc0032a3500 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Jan  8 13:59:51.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 13:59:51.942: INFO: rc: 1
Jan  8 13:59:51.942: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ab0f30 exit status 1   true [0xc0033440d0 0xc0033440e8 0xc003344100] [0xc0033440d0 0xc0033440e8 0xc003344100] [0xc0033440e0 0xc0033440f8] [0xba6c50 0xba6c50] 0xc0032a3b00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:01.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:02.140: INFO: rc: 1
Jan  8 14:00:02.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0022794d0 exit status 1   true [0xc0020703f0 0xc002070420 0xc002070460] [0xc0020703f0 0xc002070420 0xc002070460] [0xc002070410 0xc002070450] [0xba6c50 0xba6c50] 0xc0027ba5a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:12.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:12.366: INFO: rc: 1
Jan  8 14:00:12.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0027fac30 exit status 1   true [0xc001bde0a8 0xc001bde0c0 0xc001bde0d8] [0xc001bde0a8 0xc001bde0c0 0xc001bde0d8] [0xc001bde0b8 0xc001bde0d0] [0xba6c50 0xba6c50] 0xc002941260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:22.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:22.493: INFO: rc: 1
Jan  8 14:00:22.494: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0022795f0 exit status 1   true [0xc002070470 0xc0020704a0 0xc0020704d8] [0xc002070470 0xc0020704a0 0xc0020704d8] [0xc002070490 0xc0020704c8] [0xba6c50 0xba6c50] 0xc0027bac00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:32.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:32.683: INFO: rc: 1
Jan  8 14:00:32.683: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ab1020 exit status 1   true [0xc003344108 0xc003344120 0xc003344138] [0xc003344108 0xc003344120 0xc003344138] [0xc003344118 0xc003344130] [0xba6c50 0xba6c50] 0xc0032a3e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:42.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:42.816: INFO: rc: 1
Jan  8 14:00:42.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ab1110 exit status 1   true [0xc003344140 0xc003344158 0xc003344170] [0xc003344140 0xc003344158 0xc003344170] [0xc003344150 0xc003344168] [0xba6c50 0xba6c50] 0xc001fda720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:00:52.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:00:53.014: INFO: rc: 1
Jan  8 14:00:53.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001e6a390 exit status 1   true [0xc000714ab0 0xc000714bd0 0xc000714c90] [0xc000714ab0 0xc000714bd0 0xc000714c90] [0xc000714bb8 0xc000714c30] [0xba6c50 0xba6c50] 0xc002a6d1a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:01:03.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:01:03.178: INFO: rc: 1
Jan  8 14:01:03.178: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc0027fad20 exit status 1   true [0xc001bde0e0 0xc001bde0f8 0xc001bde110] [0xc001bde0e0 0xc001bde0f8 0xc001bde110] [0xc001bde0f0 0xc001bde108] [0xba6c50 0xba6c50] 0xc002941680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:01:13.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:01:13.344: INFO: rc: 1
Jan  8 14:01:13.344: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc001520060 exit status 1   true [0xc00035daf8 0xc00035dbe8 0xc00035dd48] [0xc00035daf8 0xc00035dbe8 0xc00035dd48] [0xc00035db88 0xc00035dd00] [0xba6c50 0xba6c50] 0xc0032a2480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
[19 further near-identical retry attempts, one every 10s from 14:01:23 through 14:04:28, all failing with rc: 1 and stderr 'Error from server (NotFound): pods "ss-2" not found', trimmed]
Jan  8 14:04:38.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2447 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:04:38.989: INFO: rc: 1
Jan  8 14:04:38.990: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  8 14:04:38.990: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  8 14:04:39.006: INFO: Deleting all statefulset in ns statefulset-2447
Jan  8 14:04:39.008: INFO: Scaling statefulset ss to 0
Jan  8 14:04:39.016: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:04:39.018: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:04:39.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2447" for this suite.
Jan  8 14:04:47.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:04:47.198: INFO: namespace statefulset-2447 deletion completed in 8.157227199s

• [SLOW TEST:391.557 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
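
The retry block above is the suite re-enabling the pod's readiness endpoint by moving index.html back into the nginx web root via kubectl exec, retrying every 10s on failure. Because ss-2 had already been deleted during the scale-down, every attempt failed with NotFound until the helper ran out of time and the test moved on to scaling the set to 0. A minimal Go sketch of that retry shape, with hypothetical helper names (an illustration, not the e2e framework's actual code):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// runHostCmd shells out to kubectl exec, mirroring the command in the log.
func runHostCmd(ns, pod, cmd string) (string, error) {
    out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
        "exec", "--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
    return string(out), err
}

// runHostCmdWithRetries retries every interval until success or timeout,
// matching the "Waiting 10s to retry failed RunHostCmd" cadence above.
func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    for {
        out, err := runHostCmd(ns, pod, cmd)
        if err == nil {
            return out, nil
        }
        if time.Now().After(deadline) {
            return out, fmt.Errorf("giving up after %v: %v", timeout, err)
        }
        fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, err)
        time.Sleep(interval)
    }
}

func main() {
    out, _ := runHostCmdWithRetries("statefulset-2447", "ss-2",
        "mv -v /tmp/index.html /usr/share/nginx/html/ || true",
        10*time.Second, 3*time.Minute)
    fmt.Print(out)
}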
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:04:47.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  8 14:04:57.364: INFO: Pod pod-hostip-6e2b4cb9-5617-4186-99a2-6cded81a1c27 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:04:57.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4008" for this suite.
Jan  8 14:05:19.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:05:19.485: INFO: namespace pods-4008 deletion completed in 22.114822375s

• [SLOW TEST:32.286 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
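
What this test asserts: once the pod is running, its status.hostIP is populated with the address of the node it landed on (10.96.3.65, i.e. iruya-node, above). A short client-go sketch of reading that field, written against the v1.15-era API this cluster runs (newer client-go releases add a context.Context argument to Get):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Namespace and pod name taken from the log above.
    pod, err := client.CoreV1().Pods("pods-4008").Get(
        "pod-hostip-6e2b4cb9-5617-4186-99a2-6cded81a1c27", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("hostIP:", pod.Status.HostIP) // e.g. 10.96.3.65 above
}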
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:05:19.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4588
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 14:05:19.545: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 14:05:53.864: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-4588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 14:05:53.864: INFO: >>> kubeConfig: /root/.kube/config
I0108 14:05:53.980395       8 log.go:172] (0xc000c5bef0) (0xc003115ea0) Create stream
I0108 14:05:53.980465       8 log.go:172] (0xc000c5bef0) (0xc003115ea0) Stream added, broadcasting: 1
I0108 14:05:53.987994       8 log.go:172] (0xc000c5bef0) Reply frame received for 1
I0108 14:05:53.988029       8 log.go:172] (0xc000c5bef0) (0xc000a9f400) Create stream
I0108 14:05:53.988036       8 log.go:172] (0xc000c5bef0) (0xc000a9f400) Stream added, broadcasting: 3
I0108 14:05:53.989384       8 log.go:172] (0xc000c5bef0) Reply frame received for 3
I0108 14:05:53.989408       8 log.go:172] (0xc000c5bef0) (0xc003115f40) Create stream
I0108 14:05:53.989418       8 log.go:172] (0xc000c5bef0) (0xc003115f40) Stream added, broadcasting: 5
I0108 14:05:53.990900       8 log.go:172] (0xc000c5bef0) Reply frame received for 5
I0108 14:05:54.158302       8 log.go:172] (0xc000c5bef0) Data frame received for 3
I0108 14:05:54.158444       8 log.go:172] (0xc000a9f400) (3) Data frame handling
I0108 14:05:54.158478       8 log.go:172] (0xc000a9f400) (3) Data frame sent
I0108 14:05:54.279846       8 log.go:172] (0xc000c5bef0) Data frame received for 1
I0108 14:05:54.279904       8 log.go:172] (0xc000c5bef0) (0xc000a9f400) Stream removed, broadcasting: 3
I0108 14:05:54.279966       8 log.go:172] (0xc003115ea0) (1) Data frame handling
I0108 14:05:54.279995       8 log.go:172] (0xc003115ea0) (1) Data frame sent
I0108 14:05:54.280037       8 log.go:172] (0xc000c5bef0) (0xc003115f40) Stream removed, broadcasting: 5
I0108 14:05:54.280072       8 log.go:172] (0xc000c5bef0) (0xc003115ea0) Stream removed, broadcasting: 1
I0108 14:05:54.280116       8 log.go:172] (0xc000c5bef0) Go away received
I0108 14:05:54.280375       8 log.go:172] (0xc000c5bef0) (0xc003115ea0) Stream removed, broadcasting: 1
I0108 14:05:54.280391       8 log.go:172] (0xc000c5bef0) (0xc000a9f400) Stream removed, broadcasting: 3
I0108 14:05:54.280404       8 log.go:172] (0xc000c5bef0) (0xc003115f40) Stream removed, broadcasting: 5
Jan  8 14:05:54.280: INFO: Waiting for endpoints: map[]
Jan  8 14:05:54.288: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-4588 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 14:05:54.288: INFO: >>> kubeConfig: /root/.kube/config
I0108 14:05:54.341509       8 log.go:172] (0xc0011d08f0) (0xc00165c280) Create stream
I0108 14:05:54.341560       8 log.go:172] (0xc0011d08f0) (0xc00165c280) Stream added, broadcasting: 1
I0108 14:05:54.347267       8 log.go:172] (0xc0011d08f0) Reply frame received for 1
I0108 14:05:54.347328       8 log.go:172] (0xc0011d08f0) (0xc00202f220) Create stream
I0108 14:05:54.347338       8 log.go:172] (0xc0011d08f0) (0xc00202f220) Stream added, broadcasting: 3
I0108 14:05:54.348763       8 log.go:172] (0xc0011d08f0) Reply frame received for 3
I0108 14:05:54.348791       8 log.go:172] (0xc0011d08f0) (0xc00315e5a0) Create stream
I0108 14:05:54.348797       8 log.go:172] (0xc0011d08f0) (0xc00315e5a0) Stream added, broadcasting: 5
I0108 14:05:54.350695       8 log.go:172] (0xc0011d08f0) Reply frame received for 5
I0108 14:05:54.471674       8 log.go:172] (0xc0011d08f0) Data frame received for 3
I0108 14:05:54.471796       8 log.go:172] (0xc00202f220) (3) Data frame handling
I0108 14:05:54.471841       8 log.go:172] (0xc00202f220) (3) Data frame sent
I0108 14:05:54.679107       8 log.go:172] (0xc0011d08f0) Data frame received for 1
I0108 14:05:54.679354       8 log.go:172] (0xc0011d08f0) (0xc00315e5a0) Stream removed, broadcasting: 5
I0108 14:05:54.679426       8 log.go:172] (0xc0011d08f0) (0xc00202f220) Stream removed, broadcasting: 3
I0108 14:05:54.679464       8 log.go:172] (0xc00165c280) (1) Data frame handling
I0108 14:05:54.679537       8 log.go:172] (0xc00165c280) (1) Data frame sent
I0108 14:05:54.679561       8 log.go:172] (0xc0011d08f0) (0xc00165c280) Stream removed, broadcasting: 1
I0108 14:05:54.679602       8 log.go:172] (0xc0011d08f0) Go away received
I0108 14:05:54.679846       8 log.go:172] (0xc0011d08f0) (0xc00165c280) Stream removed, broadcasting: 1
I0108 14:05:54.679864       8 log.go:172] (0xc0011d08f0) (0xc00202f220) Stream removed, broadcasting: 3
I0108 14:05:54.679876       8 log.go:172] (0xc0011d08f0) (0xc00315e5a0) Stream removed, broadcasting: 5
Jan  8 14:05:54.679: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:05:54.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4588" for this suite.
Jan  8 14:06:18.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:06:18.887: INFO: namespace pod-network-test-4588 deletion completed in 24.192377325s

• [SLOW TEST:59.401 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
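
The ExecWithOptions blocks above run curl from inside the host-test pod against netexec's /dial endpoint on the test pod (10.44.0.2): that endpoint dials the target pod over HTTP and echoes back the hostnames that answered, and "Waiting for endpoints: map[]" means no expected endpoint is still outstanding. A sketch of the same probe issued directly; the JSON field name is an assumption about the netexec response shape, not something shown in this log:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// dialResult models the /dial reply; "responses" is assumed from the
// probe's behavior rather than taken from this log.
type dialResult struct {
    Responses []string `json:"responses"`
}

func main() {
    // IPs and query string taken verbatim from the curl command above.
    url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http" +
        "&host=10.32.0.4&port=8080&tries=1"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var r dialResult
    if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
        panic(err)
    }
    fmt.Println(r.Responses) // hostnames that answered the dial
}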
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:06:18.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  8 14:06:29.622: INFO: Successfully updated pod "labelsupdatee2f524fc-9601-4cb4-a49b-980c4d903fc2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:06:31.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5863" for this suite.
Jan  8 14:06:53.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:06:53.951: INFO: namespace downward-api-5863 deletion completed in 22.160427248s

• [SLOW TEST:35.064 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
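
The mechanism under test: a downward API volume projects metadata.labels into a file, and after the test updates the pod's labels ("Successfully updated pod ..."), the kubelet rewrites that projected file without restarting the container. A sketch of such a volume using the core/v1 types (the volume name and file path are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // A downward API volume exposing the pod's labels as a file; the
    // kubelet refreshes "labels" whenever the pod's labels change.
    vol := v1.Volume{
        Name: "podinfo", // illustrative name
        VolumeSource: v1.VolumeSource{
            DownwardAPI: &v1.DownwardAPIVolumeSource{
                Items: []v1.DownwardAPIVolumeFile{{
                    Path:     "labels",
                    FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}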
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:06:53.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  8 14:07:02.729: INFO: Successfully updated pod "annotationupdate3980729b-95aa-45c7-aecb-3342d777442e"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:07:04.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3797" for this suite.
Jan  8 14:07:26.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:07:26.980: INFO: namespace downward-api-3797 deletion completed in 22.130147229s

• [SLOW TEST:33.029 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
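
Same projection mechanism as the labels test above, keyed on metadata.annotations instead. The "Successfully updated pod" line corresponds to a metadata patch along these lines (the annotation key and value are hypothetical; the Patch signature matches v1.15-era client-go):

package main

import (
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Hypothetical annotation update; the kubelet then rewrites the
    // projected annotations file inside the still-running container.
    patch := []byte(`{"metadata":{"annotations":{"builder":"e2e"}}}`)
    _, err = client.CoreV1().Pods("downward-api-3797").Patch(
        "annotationupdate3980729b-95aa-45c7-aecb-3342d777442e",
        types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
}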
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:07:26.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 14:07:27.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc" in namespace "downward-api-9625" to be "success or failure"
Jan  8 14:07:27.143: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 47.529858ms
Jan  8 14:07:29.199: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103621053s
Jan  8 14:07:32.137: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.042241387s
Jan  8 14:07:34.155: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.059613822s
Jan  8 14:07:36.168: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.073165552s
STEP: Saw pod success
Jan  8 14:07:36.169: INFO: Pod "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc" satisfied condition "success or failure"
Jan  8 14:07:36.174: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc container client-container: 
STEP: delete the pod
Jan  8 14:07:36.263: INFO: Waiting for pod downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc to disappear
Jan  8 14:07:36.269: INFO: Pod downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:07:36.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9625" for this suite.
Jan  8 14:07:42.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:07:42.407: INFO: namespace downward-api-9625 deletion completed in 6.129951158s

• [SLOW TEST:15.427 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
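
The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines are a phase poll: the framework re-reads the pod roughly every 2s until it reaches Succeeded (pass) or Failed (fail). A sketch of that loop with the apimachinery wait helper; the wrapper name is hypothetical and the Get signature is the v1.15-era one:

package main

import (
    "fmt"
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitForSuccessOrFailure polls the pod phase until it is terminal,
// mirroring the ~2s cadence of the Elapsed timestamps above.
func waitForSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
    return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        switch pod.Status.Phase {
        case v1.PodSucceeded:
            return true, nil // "Saw pod success"
        case v1.PodFailed:
            return false, fmt.Errorf("pod %s/%s failed", ns, name)
        }
        return false, nil // still Pending/Running: keep polling
    })
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    err = waitForSuccessOrFailure(client, "downward-api-9625",
        "downwardapi-volume-d1501442-c75d-4794-b362-61045295b7bc")
    fmt.Println(err)
}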
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:07:42.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mdq7
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 14:07:42.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mdq7" in namespace "subpath-8842" to be "success or failure"
Jan  8 14:07:42.560: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.02805ms
Jan  8 14:07:44.574: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026084176s
Jan  8 14:07:46.583: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034734853s
Jan  8 14:07:48.595: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047180389s
Jan  8 14:07:50.607: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 8.059072073s
Jan  8 14:07:52.619: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 10.070445136s
Jan  8 14:07:54.630: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 12.081671278s
Jan  8 14:07:56.636: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 14.087887375s
Jan  8 14:07:58.653: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 16.104734628s
Jan  8 14:08:00.662: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 18.114068449s
Jan  8 14:08:02.675: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 20.127261657s
Jan  8 14:08:04.685: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 22.137397628s
Jan  8 14:08:06.698: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 24.150330066s
Jan  8 14:08:08.712: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 26.16345762s
Jan  8 14:08:10.718: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Running", Reason="", readiness=true. Elapsed: 28.169713282s
Jan  8 14:08:12.726: INFO: Pod "pod-subpath-test-downwardapi-mdq7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.17806603s
STEP: Saw pod success
Jan  8 14:08:12.726: INFO: Pod "pod-subpath-test-downwardapi-mdq7" satisfied condition "success or failure"
Jan  8 14:08:12.730: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-mdq7 container test-container-subpath-downwardapi-mdq7: 
STEP: delete the pod
Jan  8 14:08:13.177: INFO: Waiting for pod pod-subpath-test-downwardapi-mdq7 to disappear
Jan  8 14:08:13.187: INFO: Pod pod-subpath-test-downwardapi-mdq7 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mdq7
Jan  8 14:08:13.187: INFO: Deleting pod "pod-subpath-test-downwardapi-mdq7" in namespace "subpath-8842"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:08:13.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8842" for this suite.
Jan  8 14:08:19.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:08:19.323: INFO: namespace subpath-8842 deletion completed in 6.126764759s

• [SLOW TEST:36.916 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
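
What the subpath test wires up: the container mounts a single entry of the downward API volume via SubPath rather than the whole directory, and the pod runs for ~30s (the Running polls above) while the container exercises the subpath-mounted file. A sketch of the mount relationship with core/v1 types (volume and path names are illustrative):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // The volume projects downward API data; the container mounts only
    // one entry of it by SubPath instead of the whole directory.
    container := v1.Container{
        Name:  "test-container-subpath", // illustrative
        Image: "busybox",
        VolumeMounts: []v1.VolumeMount{{
            Name:      "podinfo", // must match a pod-level volume
            MountPath: "/test-volume/podname",
            SubPath:   "podname", // a single file from the volume
        }},
    }
    fmt.Printf("%+v\n", container)
}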
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:08:19.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-545
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-545
STEP: Creating statefulset with conflicting port in namespace statefulset-545
STEP: Waiting until pod test-pod will start running in namespace statefulset-545
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-545
Jan  8 14:08:29.697: INFO: Observed stateful pod in namespace: statefulset-545, name: ss-0, uid: 4d84536d-b5fe-4334-8a6f-5952fd8dac49, status phase: Pending. Waiting for statefulset controller to delete.
Jan  8 14:13:29.697: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  8 14:13:29.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-545'
Jan  8 14:13:32.127: INFO: stderr: ""
Jan  8 14:13:32.127: INFO: stdout: (kubectl describe output; reproduced formatted below)
Jan  8 14:13:32.128: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-545
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vvv5z (ro)
Volumes:
  default-token-vvv5z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vvv5z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age    From                 Message
  ----     ------            ----   ----                 -------
  Warning  PodFitsHostPorts  5m12s  kubelet, iruya-node  Predicate PodFitsHostPorts failed
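
The Warning above points at the cause of this failure: test-pod already holds host port 21017 on iruya-node, and ss-0 requests the same hostPort, so the kubelet's PodFitsHostPorts check rejects ss-0 each time the StatefulSet controller recreates it. The conflict is intentional (the test wants to watch the controller delete and recreate the failed pod), but the ss-0 it observed stayed Pending for the whole 5m window, so the re-creation assertion failed. The conflicting piece of the container spec, sketched with core/v1 types:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

func main() {
    // Both test-pod and ss-0 declare this port (see the describes above);
    // a given hostPort can be bound by only one pod per node, so the
    // second pod fails the PodFitsHostPorts predicate on iruya-node.
    port := v1.ContainerPort{
        ContainerPort: 21017,
        HostPort:      21017,
        Protocol:      v1.ProtocolTCP,
    }
    fmt.Printf("%+v\n", port)
}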

Jan  8 14:13:32.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-545 --tail=100'
Jan  8 14:13:32.386: INFO: rc: 1
Jan  8 14:13:32.387: INFO: 
Last 100 log lines of ss-0:

Jan  8 14:13:32.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-545'
Jan  8 14:13:32.627: INFO: stderr: ""
Jan  8 14:13:32.627: INFO: stdout: (kubectl describe output; reproduced formatted below)
Jan  8 14:13:32.627: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-545
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Wed, 08 Jan 2020 14:08:19 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://7f51d2a81b4a3906243349c2600e46028228a406f5ea662205af4dec495586a7
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Wed, 08 Jan 2020 14:08:27 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vvv5z (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-vvv5z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vvv5z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m8s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m6s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m5s  kubelet, iruya-node  Started container nginx

Jan  8 14:13:32.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-545 --tail=100'
Jan  8 14:13:32.756: INFO: stderr: ""
Jan  8 14:13:32.756: INFO: stdout: ""
Jan  8 14:13:32.756: INFO: 
Last 100 log lines of test-pod:

Jan  8 14:13:32.756: INFO: Deleting all statefulset in ns statefulset-545
Jan  8 14:13:32.801: INFO: Scaling statefulset ss to 0
Jan  8 14:13:42.891: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:13:42.895: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-545".
STEP: Found 8 events.
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:19 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:19 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-545/ss is recreating failed Pod ss-0
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:19 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:20 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:20 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:24 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:26 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Jan  8 14:13:42.923: INFO: At 2020-01-08 14:08:27 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Jan  8 14:13:42.927: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Jan  8 14:13:42.927: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:08:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:08:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:08:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:08:19 +0000 UTC  }]
Jan  8 14:13:42.927: INFO: 
Jan  8 14:13:42.936: INFO: 
Logging node info for node iruya-node
Jan  8 14:13:42.940: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:19781534,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-08 14:13:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-08 14:13:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-08 14:13:38 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-08 14:13:38 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan  8 14:13:42.941: INFO: 
Logging kubelet events for node iruya-node
Jan  8 14:13:42.947: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Jan  8 14:13:42.967: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Jan  8 14:13:42.967: INFO: 	Container weave ready: true, restart count 0
Jan  8 14:13:42.967: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 14:13:42.967: INFO: test-pod started at 2020-01-08 14:08:19 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:42.967: INFO: 	Container nginx ready: true, restart count 0
Jan  8 14:13:42.967: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:42.967: INFO: 	Container kube-proxy ready: true, restart count 0
W0108 14:13:42.975906       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 14:13:43.033: INFO: 
Latency metrics for node iruya-node
Jan  8 14:13:43.033: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Jan  8 14:13:43.038: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:19781465,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-08 14:12:51 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-08 14:12:51 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-08 14:12:51 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-08 14:12:51 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan  8 14:13:43.038: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Jan  8 14:13:43.042: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Jan  8 14:13:43.056: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  8 14:13:43.056: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  8 14:13:43.056: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container coredns ready: true, restart count 0
Jan  8 14:13:43.056: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container etcd ready: true, restart count 0
Jan  8 14:13:43.056: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container weave ready: true, restart count 0
Jan  8 14:13:43.056: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 14:13:43.056: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container coredns ready: true, restart count 0
Jan  8 14:13:43.056: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  8 14:13:43.056: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Jan  8 14:13:43.056: INFO: 	Container kube-proxy ready: true, restart count 0
W0108 14:13:43.062681       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 14:13:43.092: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Jan  8 14:13:43.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-545" for this suite.
Jan  8 14:14:07.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:14:07.291: INFO: namespace statefulset-545 deletion completed in 24.193345676s

• Failure [347.967 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Jan  8 14:13:29.697: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
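
For context on the failure above: the test evicts pod ss-0 and expects the StatefulSet controller to create a replacement, which never appeared within the timeout. A rough triage sketch, assuming the run is reproduced while the namespace (statefulset-545 here) still exists; the StatefulSet name ss is inferred from the pod name ss-0:

  $ kubectl -n statefulset-545 get events --sort-by=.lastTimestamp
  $ kubectl -n statefulset-545 describe statefulset ss
  $ kubectl -n statefulset-545 get pods -o wide --watch
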
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:14:07.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:14:07.411: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  8 14:14:12.420: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  8 14:14:16.433: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  8 14:14:16.479: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2207,SelfLink:/apis/apps/v1/namespaces/deployment-2207/deployments/test-cleanup-deployment,UID:0d667042-236a-48ed-b602-6b361f135531,ResourceVersion:19781623,Generation:1,CreationTimestamp:2020-01-08 14:14:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  8 14:14:16.491: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2207,SelfLink:/apis/apps/v1/namespaces/deployment-2207/replicasets/test-cleanup-deployment-55bbcbc84c,UID:ec4da2a6-f8f2-4219-85d6-5900af2b31ae,ResourceVersion:19781625,Generation:1,CreationTimestamp:2020-01-08 14:14:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0d667042-236a-48ed-b602-6b361f135531 0xc002d0d117 0xc002d0d118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 14:14:16.491: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  8 14:14:16.492: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2207,SelfLink:/apis/apps/v1/namespaces/deployment-2207/replicasets/test-cleanup-controller,UID:c392aaef-de0b-4e55-b31d-91ae9e5b74ce,ResourceVersion:19781624,Generation:1,CreationTimestamp:2020-01-08 14:14:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0d667042-236a-48ed-b602-6b361f135531 0xc002d0d047 0xc002d0d048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  8 14:14:16.511: INFO: Pod "test-cleanup-controller-xt76n" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xt76n,GenerateName:test-cleanup-controller-,Namespace:deployment-2207,SelfLink:/api/v1/namespaces/deployment-2207/pods/test-cleanup-controller-xt76n,UID:971ba258-c9b9-4eb1-8698-9305f1587a35,ResourceVersion:19781619,Generation:0,CreationTimestamp:2020-01-08 14:14:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c392aaef-de0b-4e55-b31d-91ae9e5b74ce 0xc002d0d9f7 0xc002d0d9f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k6wc7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k6wc7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-k6wc7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d0da70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d0da90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:14:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:14:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:14:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:14:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-08 14:14:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:14:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b9dbf29d447e04701f97d4421a21f44fcad7ef2fbca24ae1721eca788c63c1dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:14:16.511: INFO: Pod "test-cleanup-deployment-55bbcbc84c-r2wc5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-r2wc5,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2207,SelfLink:/api/v1/namespaces/deployment-2207/pods/test-cleanup-deployment-55bbcbc84c-r2wc5,UID:86e16e29-70ca-4949-85de-236fbdd80526,ResourceVersion:19781626,Generation:0,CreationTimestamp:2020-01-08 14:14:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c ec4da2a6-f8f2-4219-85d6-5900af2b31ae 0xc002d0db77 0xc002d0db78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k6wc7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k6wc7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-k6wc7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002d0dbe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002d0dc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:14:16.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2207" for this suite.
Jan  8 14:14:24.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:14:24.751: INFO: namespace deployment-2207 deletion completed in 8.129467058s

• [SLOW TEST:17.460 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
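
The Deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete superseded ReplicaSets instead of retaining them for rollback. A minimal way to observe the same cleanup by hand (cleanup-demo is a hypothetical name; kubectl create deployment labels it app=cleanup-demo):

  $ kubectl create deployment cleanup-demo --image=docker.io/library/nginx:1.14-alpine
  $ kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
  $ kubectl set image deployment/cleanup-demo '*=docker.io/library/nginx:1.15-alpine'
  $ kubectl get rs -l app=cleanup-demo    # the old ReplicaSet disappears once the rollout completes
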
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:14:24.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8407
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  8 14:14:24.970: INFO: Found 0 stateful pods, waiting for 3
Jan  8 14:14:34.982: INFO: Found 2 stateful pods, waiting for 3
Jan  8 14:14:44.978: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:14:44.978: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:14:44.978: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  8 14:14:54.980: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:14:54.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:14:54.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:14:54.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8407 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:14:55.625: INFO: stderr: "I0108 14:14:55.260753    1862 log.go:172] (0xc0007cc420) (0xc00033c6e0) Create stream\nI0108 14:14:55.261083    1862 log.go:172] (0xc0007cc420) (0xc00033c6e0) Stream added, broadcasting: 1\nI0108 14:14:55.264269    1862 log.go:172] (0xc0007cc420) Reply frame received for 1\nI0108 14:14:55.264407    1862 log.go:172] (0xc0007cc420) (0xc00098c000) Create stream\nI0108 14:14:55.264426    1862 log.go:172] (0xc0007cc420) (0xc00098c000) Stream added, broadcasting: 3\nI0108 14:14:55.265624    1862 log.go:172] (0xc0007cc420) Reply frame received for 3\nI0108 14:14:55.265657    1862 log.go:172] (0xc0007cc420) (0xc00033c780) Create stream\nI0108 14:14:55.265665    1862 log.go:172] (0xc0007cc420) (0xc00033c780) Stream added, broadcasting: 5\nI0108 14:14:55.266973    1862 log.go:172] (0xc0007cc420) Reply frame received for 5\nI0108 14:14:55.436601    1862 log.go:172] (0xc0007cc420) Data frame received for 5\nI0108 14:14:55.436711    1862 log.go:172] (0xc00033c780) (5) Data frame handling\nI0108 14:14:55.436732    1862 log.go:172] (0xc00033c780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:14:55.510029    1862 log.go:172] (0xc0007cc420) Data frame received for 3\nI0108 14:14:55.510103    1862 log.go:172] (0xc00098c000) (3) Data frame handling\nI0108 14:14:55.510135    1862 log.go:172] (0xc00098c000) (3) Data frame sent\nI0108 14:14:55.605915    1862 log.go:172] (0xc0007cc420) Data frame received for 1\nI0108 14:14:55.606177    1862 log.go:172] (0xc0007cc420) (0xc00033c780) Stream removed, broadcasting: 5\nI0108 14:14:55.606336    1862 log.go:172] (0xc00033c6e0) (1) Data frame handling\nI0108 14:14:55.606376    1862 log.go:172] (0xc00033c6e0) (1) Data frame sent\nI0108 14:14:55.606463    1862 log.go:172] (0xc0007cc420) (0xc00098c000) Stream removed, broadcasting: 3\nI0108 14:14:55.606503    1862 log.go:172] (0xc0007cc420) (0xc00033c6e0) Stream removed, broadcasting: 1\nI0108 14:14:55.611419    1862 log.go:172] (0xc0007cc420) (0xc00033c6e0) Stream removed, broadcasting: 1\nI0108 14:14:55.611470    1862 log.go:172] (0xc0007cc420) (0xc00098c000) Stream removed, broadcasting: 3\nI0108 14:14:55.611506    1862 log.go:172] (0xc0007cc420) (0xc00033c780) Stream removed, broadcasting: 5\nI0108 14:14:55.612455    1862 log.go:172] (0xc0007cc420) Go away received\n"
Jan  8 14:14:55.625: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:14:55.625: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
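
The exec above is the test's way of toggling readiness: the StatefulSet's pods serve /usr/share/nginx/html/index.html (presumably checked by an HTTP readiness probe on that file), so moving it marks ss2-1 not-ready, and moving it back later in the log restores readiness. The same toggle by hand, using the namespace and pod from this run:

  $ kubectl -n statefulset-8407 exec ss2-1 -- /bin/sh -c 'mv /usr/share/nginx/html/index.html /tmp/'    # readiness fails
  $ kubectl -n statefulset-8407 exec ss2-1 -- /bin/sh -c 'mv /tmp/index.html /usr/share/nginx/html/'    # readiness restored
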

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  8 14:15:05.687: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  8 14:15:15.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8407 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:15:16.291: INFO: stderr: "I0108 14:15:15.987172    1882 log.go:172] (0xc0008574a0) (0xc0007f8f00) Create stream\nI0108 14:15:15.987637    1882 log.go:172] (0xc0008574a0) (0xc0007f8f00) Stream added, broadcasting: 1\nI0108 14:15:16.002941    1882 log.go:172] (0xc0008574a0) Reply frame received for 1\nI0108 14:15:16.003039    1882 log.go:172] (0xc0008574a0) (0xc0007f8000) Create stream\nI0108 14:15:16.003054    1882 log.go:172] (0xc0008574a0) (0xc0007f8000) Stream added, broadcasting: 3\nI0108 14:15:16.004703    1882 log.go:172] (0xc0008574a0) Reply frame received for 3\nI0108 14:15:16.004754    1882 log.go:172] (0xc0008574a0) (0xc0003ddae0) Create stream\nI0108 14:15:16.004766    1882 log.go:172] (0xc0008574a0) (0xc0003ddae0) Stream added, broadcasting: 5\nI0108 14:15:16.006308    1882 log.go:172] (0xc0008574a0) Reply frame received for 5\nI0108 14:15:16.138918    1882 log.go:172] (0xc0008574a0) Data frame received for 3\nI0108 14:15:16.139201    1882 log.go:172] (0xc0007f8000) (3) Data frame handling\nI0108 14:15:16.139271    1882 log.go:172] (0xc0007f8000) (3) Data frame sent\nI0108 14:15:16.142933    1882 log.go:172] (0xc0008574a0) Data frame received for 5\nI0108 14:15:16.142996    1882 log.go:172] (0xc0003ddae0) (5) Data frame handling\nI0108 14:15:16.143037    1882 log.go:172] (0xc0003ddae0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 14:15:16.249245    1882 log.go:172] (0xc0008574a0) (0xc0007f8000) Stream removed, broadcasting: 3\nI0108 14:15:16.249858    1882 log.go:172] (0xc0008574a0) (0xc0003ddae0) Stream removed, broadcasting: 5\nI0108 14:15:16.250044    1882 log.go:172] (0xc0008574a0) Data frame received for 1\nI0108 14:15:16.250118    1882 log.go:172] (0xc0007f8f00) (1) Data frame handling\nI0108 14:15:16.250174    1882 log.go:172] (0xc0007f8f00) (1) Data frame sent\nI0108 14:15:16.250187    1882 log.go:172] (0xc0008574a0) (0xc0007f8f00) Stream removed, broadcasting: 1\nI0108 14:15:16.250207    1882 log.go:172] (0xc0008574a0) Go away received\nI0108 14:15:16.252928    1882 log.go:172] (0xc0008574a0) (0xc0007f8f00) Stream removed, broadcasting: 1\nI0108 14:15:16.253205    1882 log.go:172] (0xc0008574a0) (0xc0007f8000) Stream removed, broadcasting: 3\nI0108 14:15:16.253245    1882 log.go:172] (0xc0008574a0) (0xc0003ddae0) Stream removed, broadcasting: 5\n"
Jan  8 14:15:16.292: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 14:15:16.292: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 14:15:26.342: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:15:26.342: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 14:15:26.342: INFO: Waiting for Pod statefulset-8407/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 14:15:46.362: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:15:46.362: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  8 14:15:56.356: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:15:56.356: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jan  8 14:16:06.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8407 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:16:06.975: INFO: stderr: "I0108 14:16:06.670525    1901 log.go:172] (0xc0009a84d0) (0xc0009a2820) Create stream\nI0108 14:16:06.670964    1901 log.go:172] (0xc0009a84d0) (0xc0009a2820) Stream added, broadcasting: 1\nI0108 14:16:06.686451    1901 log.go:172] (0xc0009a84d0) Reply frame received for 1\nI0108 14:16:06.686587    1901 log.go:172] (0xc0009a84d0) (0xc0009a2000) Create stream\nI0108 14:16:06.686608    1901 log.go:172] (0xc0009a84d0) (0xc0009a2000) Stream added, broadcasting: 3\nI0108 14:16:06.688042    1901 log.go:172] (0xc0009a84d0) Reply frame received for 3\nI0108 14:16:06.688100    1901 log.go:172] (0xc0009a84d0) (0xc0009d2000) Create stream\nI0108 14:16:06.688125    1901 log.go:172] (0xc0009a84d0) (0xc0009d2000) Stream added, broadcasting: 5\nI0108 14:16:06.689826    1901 log.go:172] (0xc0009a84d0) Reply frame received for 5\nI0108 14:16:06.835014    1901 log.go:172] (0xc0009a84d0) Data frame received for 5\nI0108 14:16:06.835369    1901 log.go:172] (0xc0009d2000) (5) Data frame handling\nI0108 14:16:06.835476    1901 log.go:172] (0xc0009d2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:16:06.874505    1901 log.go:172] (0xc0009a84d0) Data frame received for 3\nI0108 14:16:06.874645    1901 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0108 14:16:06.874716    1901 log.go:172] (0xc0009a2000) (3) Data frame sent\nI0108 14:16:06.966045    1901 log.go:172] (0xc0009a84d0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0108 14:16:06.966472    1901 log.go:172] (0xc0009a84d0) Data frame received for 1\nI0108 14:16:06.966654    1901 log.go:172] (0xc0009a2820) (1) Data frame handling\nI0108 14:16:06.966753    1901 log.go:172] (0xc0009a2820) (1) Data frame sent\nI0108 14:16:06.966787    1901 log.go:172] (0xc0009a84d0) (0xc0009a2820) Stream removed, broadcasting: 1\nI0108 14:16:06.967137    1901 log.go:172] (0xc0009a84d0) (0xc0009d2000) Stream removed, broadcasting: 5\nI0108 14:16:06.967460    1901 log.go:172] (0xc0009a84d0) Go away received\nI0108 14:16:06.968808    1901 log.go:172] (0xc0009a84d0) (0xc0009a2820) Stream removed, broadcasting: 1\nI0108 14:16:06.968843    1901 log.go:172] (0xc0009a84d0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0108 14:16:06.968856    1901 log.go:172] (0xc0009a84d0) (0xc0009d2000) Stream removed, broadcasting: 5\n"
Jan  8 14:16:06.975: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:16:06.975: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 14:16:17.023: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  8 14:16:27.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8407 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:16:27.566: INFO: stderr: "I0108 14:16:27.376112    1921 log.go:172] (0xc0009fe0b0) (0xc0009c46e0) Create stream\nI0108 14:16:27.376399    1921 log.go:172] (0xc0009fe0b0) (0xc0009c46e0) Stream added, broadcasting: 1\nI0108 14:16:27.380334    1921 log.go:172] (0xc0009fe0b0) Reply frame received for 1\nI0108 14:16:27.380388    1921 log.go:172] (0xc0009fe0b0) (0xc00067c280) Create stream\nI0108 14:16:27.380407    1921 log.go:172] (0xc0009fe0b0) (0xc00067c280) Stream added, broadcasting: 3\nI0108 14:16:27.381678    1921 log.go:172] (0xc0009fe0b0) Reply frame received for 3\nI0108 14:16:27.381773    1921 log.go:172] (0xc0009fe0b0) (0xc0001e0000) Create stream\nI0108 14:16:27.381785    1921 log.go:172] (0xc0009fe0b0) (0xc0001e0000) Stream added, broadcasting: 5\nI0108 14:16:27.383416    1921 log.go:172] (0xc0009fe0b0) Reply frame received for 5\nI0108 14:16:27.461598    1921 log.go:172] (0xc0009fe0b0) Data frame received for 5\nI0108 14:16:27.461617    1921 log.go:172] (0xc0009fe0b0) Data frame received for 3\nI0108 14:16:27.461647    1921 log.go:172] (0xc00067c280) (3) Data frame handling\nI0108 14:16:27.461667    1921 log.go:172] (0xc00067c280) (3) Data frame sent\nI0108 14:16:27.461693    1921 log.go:172] (0xc0001e0000) (5) Data frame handling\nI0108 14:16:27.461699    1921 log.go:172] (0xc0001e0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 14:16:27.554053    1921 log.go:172] (0xc0009fe0b0) Data frame received for 1\nI0108 14:16:27.554158    1921 log.go:172] (0xc0009fe0b0) (0xc0001e0000) Stream removed, broadcasting: 5\nI0108 14:16:27.554240    1921 log.go:172] (0xc0009c46e0) (1) Data frame handling\nI0108 14:16:27.554258    1921 log.go:172] (0xc0009c46e0) (1) Data frame sent\nI0108 14:16:27.554351    1921 log.go:172] (0xc0009fe0b0) (0xc00067c280) Stream removed, broadcasting: 3\nI0108 14:16:27.554445    1921 log.go:172] (0xc0009fe0b0) (0xc0009c46e0) Stream removed, broadcasting: 1\nI0108 14:16:27.554467    1921 log.go:172] (0xc0009fe0b0) Go away received\nI0108 14:16:27.555976    1921 log.go:172] (0xc0009fe0b0) (0xc0009c46e0) Stream removed, broadcasting: 1\nI0108 14:16:27.556006    1921 log.go:172] (0xc0009fe0b0) (0xc00067c280) Stream removed, broadcasting: 3\nI0108 14:16:27.556023    1921 log.go:172] (0xc0009fe0b0) (0xc0001e0000) Stream removed, broadcasting: 5\n"
Jan  8 14:16:27.566: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 14:16:27.566: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 14:16:37.604: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:16:37.604: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  8 14:16:37.604: INFO: Waiting for Pod statefulset-8407/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  8 14:16:47.618: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:16:47.618: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  8 14:16:47.618: INFO: Waiting for Pod statefulset-8407/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  8 14:16:57.640: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
Jan  8 14:16:57.640: INFO: Waiting for Pod statefulset-8407/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  8 14:17:07.621: INFO: Waiting for StatefulSet statefulset-8407/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  8 14:17:17.647: INFO: Deleting all statefulset in ns statefulset-8407
Jan  8 14:17:17.652: INFO: Scaling statefulset ss2 to 0
Jan  8 14:17:47.679: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:17:47.684: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:17:47.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8407" for this suite.
Jan  8 14:17:55.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:17:55.979: INFO: namespace statefulset-8407 deletion completed in 8.234156071s

• [SLOW TEST:211.228 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
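
What the test just did, expressed as plain kubectl against the same StatefulSet; the '*=' form sidesteps knowing the container name, which the log does not show:

  $ kubectl -n statefulset-8407 set image statefulset/ss2 '*=docker.io/library/nginx:1.15-alpine'
  $ kubectl -n statefulset-8407 rollout status statefulset/ss2
  $ kubectl -n statefulset-8407 rollout undo statefulset/ss2       # roll back to the previous revision
  $ kubectl -n statefulset-8407 rollout history statefulset/ss2    # revisions match the controller-revision hashes logged above
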
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:17:55.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8d7e7c45-0543-427d-ac9e-b62198d546cc
STEP: Creating a pod to test consume secrets
Jan  8 14:17:56.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114" in namespace "projected-4802" to be "success or failure"
Jan  8 14:17:56.228: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114": Phase="Pending", Reason="", readiness=false. Elapsed: 14.953486ms
Jan  8 14:17:58.261: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047442631s
Jan  8 14:18:00.267: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053570027s
Jan  8 14:18:02.273: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060018923s
Jan  8 14:18:04.290: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076878629s
STEP: Saw pod success
Jan  8 14:18:04.290: INFO: Pod "pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114" satisfied condition "success or failure"
Jan  8 14:18:04.302: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114 container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 14:18:04.449: INFO: Waiting for pod pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114 to disappear
Jan  8 14:18:04.453: INFO: Pod pod-projected-secrets-f269fc94-012f-4cfd-8412-77cded613114 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:18:04.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4802" for this suite.
Jan  8 14:18:10.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:18:10.587: INFO: namespace projected-4802 deletion completed in 6.130004481s

• [SLOW TEST:14.606 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
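
A standalone manifest sketch of what this test exercises: a projected secret volume with an explicit defaultMode, consumed by a non-root pod with fsGroup set. The pod name and the referenced Secret are placeholders, not taken from the run above:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    securityContext:
      runAsUser: 1000          # non-root, as the [LinuxOnly] variant requires
      fsGroup: 1001
    volumes:
    - name: secret-vol
      projected:
        defaultMode: 0440
        sources:
        - secret:
            name: my-secret    # placeholder; must exist in the namespace
    containers:
    - name: check
      image: busybox:1.29
      command: ["sh", "-c", "ls -ln /etc/projected && sleep 3600"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/projected
  EOF
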
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:18:10.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:18:10.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:18:19.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9216" for this suite.
Jan  8 14:19:11.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:19:11.260: INFO: namespace pods-9216 deletion completed in 52.226558482s

• [SLOW TEST:60.670 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
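
For reference, the everyday equivalent of what this test verifies is kubectl exec; the conformance test differs only in driving the pod's exec subresource over a websocket upgrade instead of SPDY. Pod name below is a placeholder:

  $ kubectl exec websocket-demo -- echo remote command ran
  # subresource the test reaches directly, via a websocket upgrade:
  #   /api/v1/namespaces/<ns>/pods/websocket-demo/exec?command=echo&command=hi&stdout=true
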
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:19:11.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  8 14:19:11.320: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  8 14:19:11.328: INFO: Waiting for terminating namespaces to be deleted...
Jan  8 14:19:11.330: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  8 14:19:11.341: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  8 14:19:11.341: INFO: 	Container weave ready: true, restart count 0
Jan  8 14:19:11.341: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 14:19:11.341: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.341: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 14:19:11.341: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  8 14:19:11.372: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  8 14:19:11.372: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  8 14:19:11.372: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  8 14:19:11.372: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  8 14:19:11.372: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container coredns ready: true, restart count 0
Jan  8 14:19:11.372: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container etcd ready: true, restart count 0
Jan  8 14:19:11.372: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container weave ready: true, restart count 0
Jan  8 14:19:11.372: INFO: 	Container weave-npc ready: true, restart count 0
Jan  8 14:19:11.372: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan  8 14:19:11.372: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e7ef2871b2d384], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:19:12.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2842" for this suite.
Jan  8 14:19:18.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:19:18.598: INFO: namespace sched-pred-2842 deletion completed in 6.17752573s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.337 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
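
The same predicate can be reproduced outside the suite with any pod whose nodeSelector matches no node; the resulting FailedScheduling event carries the message quoted above. Pod name and label are placeholders:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo
  spec:
    nodeSelector:
      example-label: no-such-node    # matches no node in the cluster
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  $ kubectl describe pod restricted-pod-demo    # Events: FailedScheduling ... didn't match node selector
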
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:19:18.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  8 14:19:19.432: INFO: Waiting up to 5m0s for pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210" in namespace "emptydir-635" to be "success or failure"
Jan  8 14:19:19.440: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210": Phase="Pending", Reason="", readiness=false. Elapsed: 7.645668ms
Jan  8 14:19:21.445: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013192547s
Jan  8 14:19:23.449: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017253508s
Jan  8 14:19:25.466: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034076053s
Jan  8 14:19:27.543: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110528813s
STEP: Saw pod success
Jan  8 14:19:27.543: INFO: Pod "pod-ff854e64-e0ba-4ac6-805b-abb78a366210" satisfied condition "success or failure"
Jan  8 14:19:27.552: INFO: Trying to get logs from node iruya-node pod pod-ff854e64-e0ba-4ac6-805b-abb78a366210 container test-container: 
STEP: delete the pod
Jan  8 14:19:27.638: INFO: Waiting for pod pod-ff854e64-e0ba-4ac6-805b-abb78a366210 to disappear
Jan  8 14:19:27.717: INFO: Pod pod-ff854e64-e0ba-4ac6-805b-abb78a366210 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:19:27.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-635" for this suite.
Jan  8 14:19:33.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:19:33.953: INFO: namespace emptydir-635 deletion completed in 6.22379905s

• [SLOW TEST:15.355 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
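
A sketch of the scenario this test covers: a non-root container writing a mode-0666 file into an emptyDir on the default (node-disk) medium. The pod name is a placeholder; umask 0 makes touch create the file as 0666:

  $ kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo
  spec:
    securityContext:
      runAsUser: 1000        # non-root
    restartPolicy: Never
    volumes:
    - name: scratch
      emptyDir: {}           # default medium, i.e. node disk rather than tmpfs
    containers:
    - name: writer
      image: busybox:1.29
      command: ["sh", "-c", "umask 0 && touch /scratch/f && ls -ln /scratch"]
      volumeMounts:
      - name: scratch
        mountPath: /scratch
  EOF
  $ kubectl logs emptydir-0666-demo    # expect: -rw-rw-rw- ... f
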
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:19:33.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:19:34.057: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  8 14:19:34.125: INFO: Number of nodes with available pods: 0
Jan  8 14:19:34.125: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:35.139: INFO: Number of nodes with available pods: 0
Jan  8 14:19:35.139: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:36.840: INFO: Number of nodes with available pods: 0
Jan  8 14:19:36.840: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:37.146: INFO: Number of nodes with available pods: 0
Jan  8 14:19:37.146: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:38.144: INFO: Number of nodes with available pods: 0
Jan  8 14:19:38.144: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:41.299: INFO: Number of nodes with available pods: 0
Jan  8 14:19:41.299: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:42.141: INFO: Number of nodes with available pods: 0
Jan  8 14:19:42.141: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:43.152: INFO: Number of nodes with available pods: 0
Jan  8 14:19:43.152: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:44.138: INFO: Number of nodes with available pods: 0
Jan  8 14:19:44.138: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:19:45.139: INFO: Number of nodes with available pods: 2
Jan  8 14:19:45.139: INFO: Number of running nodes: 2, number of available pods: 2
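
Before the image-update step that follows, the kubectl equivalent of the RollingUpdate the test is about to drive; the namespace is a placeholder since this run's daemonsets-* namespace is only printed later, and '*=' avoids guessing the container name:

  $ kubectl -n <daemonsets-namespace> set image daemonset/daemon-set '*=gcr.io/kubernetes-e2e-test-images/redis:1.0'
  $ kubectl -n <daemonsets-namespace> rollout status daemonset/daemon-set
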
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  8 14:19:45.273: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:45.273: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:46.300: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:46.300: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:47.298: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:47.298: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:48.829: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:48.829: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:49.294: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:49.294: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:50.297: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:50.297: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:50.297: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:51.304: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:51.304: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:51.304: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:52.292: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:52.292: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:52.292: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:53.294: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:53.294: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:53.294: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:54.295: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:54.295: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:54.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:55.295: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:55.295: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:55.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:56.294: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:56.294: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:56.294: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:57.294: INFO: Wrong image for pod: daemon-set-hf6rg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:57.295: INFO: Pod daemon-set-hf6rg is not available
Jan  8 14:19:57.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:58.295: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:19:58.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:19:59.295: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:19:59.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:00.298: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:20:00.298: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:01.296: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:20:01.296: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:02.847: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:20:02.848: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:03.326: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:20:03.326: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:04.295: INFO: Pod daemon-set-mq4vl is not available
Jan  8 14:20:04.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:05.314: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:06.297: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:07.296: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:08.297: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:09.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:10.295: INFO: Wrong image for pod: daemon-set-tvxwf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  8 14:20:10.295: INFO: Pod daemon-set-tvxwf is not available
Jan  8 14:20:11.291: INFO: Pod daemon-set-6hb75 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  8 14:20:11.299: INFO: Number of nodes with available pods: 1
Jan  8 14:20:11.299: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:12.313: INFO: Number of nodes with available pods: 1
Jan  8 14:20:12.313: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:13.309: INFO: Number of nodes with available pods: 1
Jan  8 14:20:13.309: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:14.311: INFO: Number of nodes with available pods: 1
Jan  8 14:20:14.311: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:15.313: INFO: Number of nodes with available pods: 1
Jan  8 14:20:15.313: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:16.319: INFO: Number of nodes with available pods: 1
Jan  8 14:20:16.319: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:17.313: INFO: Number of nodes with available pods: 1
Jan  8 14:20:17.313: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:18.320: INFO: Number of nodes with available pods: 2
Jan  8 14:20:18.320: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7966, will wait for the garbage collector to delete the pods
Jan  8 14:20:18.422: INFO: Deleting DaemonSet.extensions daemon-set took: 13.984512ms
Jan  8 14:20:18.722: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.537891ms
Jan  8 14:20:36.636: INFO: Number of nodes with available pods: 0
Jan  8 14:20:36.636: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 14:20:36.640: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7966/daemonsets","resourceVersion":"19782684"},"items":null}

Jan  8 14:20:36.643: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7966/pods","resourceVersion":"19782684"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:20:36.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7966" for this suite.
Jan  8 14:20:42.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:20:42.824: INFO: namespace daemonsets-7966 deletion completed in 6.164283638s

• [SLOW TEST:68.870 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
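The RollingUpdate behaviour exercised above can be reproduced by hand. A minimal sketch, assuming a fresh namespace "daemonsets-demo" and the same two images as the run; the manifest details are illustrative, not the suite's Go fixture:

kubectl create namespace daemonsets-demo
# DaemonSet whose update strategy is RollingUpdate.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-demo
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Swap the pod template image; the controller replaces the daemon pod on
# each node in turn, which is the pod churn visible in the log above.
kubectl -n daemonsets-demo set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n daemonsets-demo rollout status daemonset/daemon-set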
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:20:42.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  8 14:20:43.034: INFO: Number of nodes with available pods: 0
Jan  8 14:20:43.035: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:44.057: INFO: Number of nodes with available pods: 0
Jan  8 14:20:44.057: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:45.193: INFO: Number of nodes with available pods: 0
Jan  8 14:20:45.193: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:46.051: INFO: Number of nodes with available pods: 0
Jan  8 14:20:46.051: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:47.055: INFO: Number of nodes with available pods: 0
Jan  8 14:20:47.055: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:48.064: INFO: Number of nodes with available pods: 0
Jan  8 14:20:48.064: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:50.057: INFO: Number of nodes with available pods: 0
Jan  8 14:20:50.057: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:51.695: INFO: Number of nodes with available pods: 0
Jan  8 14:20:51.695: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:52.050: INFO: Number of nodes with available pods: 0
Jan  8 14:20:52.050: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:53.100: INFO: Number of nodes with available pods: 0
Jan  8 14:20:53.100: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:20:54.059: INFO: Number of nodes with available pods: 1
Jan  8 14:20:54.059: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:20:55.058: INFO: Number of nodes with available pods: 2
Jan  8 14:20:55.059: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan  8 14:20:55.173: INFO: Number of nodes with available pods: 1
Jan  8 14:20:55.173: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:20:56.609: INFO: Number of nodes with available pods: 1
Jan  8 14:20:56.609: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:20:57.255: INFO: Number of nodes with available pods: 1
Jan  8 14:20:57.255: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:20:58.817: INFO: Number of nodes with available pods: 1
Jan  8 14:20:58.817: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:20:59.190: INFO: Number of nodes with available pods: 1
Jan  8 14:20:59.190: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:21:00.192: INFO: Number of nodes with available pods: 1
Jan  8 14:21:00.192: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:21:01.497: INFO: Number of nodes with available pods: 1
Jan  8 14:21:01.497: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:21:02.297: INFO: Number of nodes with available pods: 1
Jan  8 14:21:02.297: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:21:03.197: INFO: Number of nodes with available pods: 1
Jan  8 14:21:03.197: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:21:04.189: INFO: Number of nodes with available pods: 2
Jan  8 14:21:04.189: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3684, will wait for the garbage collector to delete the pods
Jan  8 14:21:04.271: INFO: Deleting DaemonSet.extensions daemon-set took: 15.185888ms
Jan  8 14:21:04.571: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.720852ms
Jan  8 14:21:17.897: INFO: Number of nodes with available pods: 0
Jan  8 14:21:17.897: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 14:21:17.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3684/daemonsets","resourceVersion":"19782827"},"items":null}

Jan  8 14:21:17.921: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3684/pods","resourceVersion":"19782827"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:21:17.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3684" for this suite.
Jan  8 14:21:24.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:21:24.109: INFO: namespace daemonsets-3684 deletion completed in 6.126693384s

• [SLOW TEST:41.284 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
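The revival check above flips a daemon pod's status.phase to Failed through the API, which plain kubectl cannot do directly. A rough stand-in, an approximation rather than the suite's mechanism, is to remove one daemon pod and watch the controller bring it back (namespace and label reuse the sketch above):

# Pick one daemon pod and delete it.
POD=$(kubectl -n daemonsets-demo get pods -l app=daemon-set -o jsonpath='{.items[0].metadata.name}')
kubectl -n daemonsets-demo delete pod "$POD"
# The DaemonSet controller notices the gap and schedules a replacement;
# that replacement is the "revived" pod the test waits for.
kubectl -n daemonsets-demo get pods -l app=daemon-set -w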
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:21:24.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:21:24.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  8 14:21:24.335: INFO: stderr: ""
Jan  8 14:21:24.336: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:21:24.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4467" for this suite.
Jan  8 14:21:30.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:21:30.511: INFO: namespace kubectl-4467 deletion completed in 6.166197502s

• [SLOW TEST:6.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
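The version check boils down to asserting that both the client and the server version structs appear in the output. The same condition can be verified from a shell; the JSON output format and jq are assumptions here, since the suite inspects the raw string:

# Exit non-zero unless both gitVersion fields are present.
kubectl version -o json | jq -e '.clientVersion.gitVersion and .serverVersion.gitVersion'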
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:21:30.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 14:21:30.645: INFO: Waiting up to 5m0s for pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98" in namespace "projected-3550" to be "success or failure"
Jan  8 14:21:30.656: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.115093ms
Jan  8 14:21:32.669: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024541925s
Jan  8 14:21:34.677: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031948555s
Jan  8 14:21:36.687: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042010176s
Jan  8 14:21:38.696: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0511921s
STEP: Saw pod success
Jan  8 14:21:38.696: INFO: Pod "downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98" satisfied condition "success or failure"
Jan  8 14:21:38.699: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98 container client-container: 
STEP: delete the pod
Jan  8 14:21:38.762: INFO: Waiting for pod downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98 to disappear
Jan  8 14:21:38.767: INFO: Pod downwardapi-volume-add02fcf-de9b-4629-85b2-2244f0aeeb98 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:21:38.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3550" for this suite.
Jan  8 14:21:44.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:21:44.981: INFO: namespace projected-3550 deletion completed in 6.208266785s

• [SLOW TEST:14.469 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
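The pod that just succeeded projects a downward API item with an explicit file mode and verifies it from inside the container. A minimal equivalent, where the pod name, image, and verification command are illustrative assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Print the octal mode of the projected file, then exit.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400
EOF
# Once the pod completes, its log should read "400".
kubectl logs downwardapi-mode-demo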
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:21:44.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3843
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3843
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3843
Jan  8 14:21:45.251: INFO: Found 0 stateful pods, waiting for 1
Jan  8 14:21:55.260: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with an unhealthy stateful pod
Jan  8 14:21:55.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:21:55.850: INFO: stderr: "I0108 14:21:55.544785    1959 log.go:172] (0xc000742420) (0xc0004008c0) Create stream\nI0108 14:21:55.545071    1959 log.go:172] (0xc000742420) (0xc0004008c0) Stream added, broadcasting: 1\nI0108 14:21:55.560241    1959 log.go:172] (0xc000742420) Reply frame received for 1\nI0108 14:21:55.560447    1959 log.go:172] (0xc000742420) (0xc00040e000) Create stream\nI0108 14:21:55.560496    1959 log.go:172] (0xc000742420) (0xc00040e000) Stream added, broadcasting: 3\nI0108 14:21:55.564495    1959 log.go:172] (0xc000742420) Reply frame received for 3\nI0108 14:21:55.564527    1959 log.go:172] (0xc000742420) (0xc000400960) Create stream\nI0108 14:21:55.564536    1959 log.go:172] (0xc000742420) (0xc000400960) Stream added, broadcasting: 5\nI0108 14:21:55.565954    1959 log.go:172] (0xc000742420) Reply frame received for 5\nI0108 14:21:55.674050    1959 log.go:172] (0xc000742420) Data frame received for 5\nI0108 14:21:55.674348    1959 log.go:172] (0xc000400960) (5) Data frame handling\nI0108 14:21:55.674453    1959 log.go:172] (0xc000400960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:21:55.701168    1959 log.go:172] (0xc000742420) Data frame received for 3\nI0108 14:21:55.701234    1959 log.go:172] (0xc00040e000) (3) Data frame handling\nI0108 14:21:55.701269    1959 log.go:172] (0xc00040e000) (3) Data frame sent\nI0108 14:21:55.831102    1959 log.go:172] (0xc000742420) (0xc000400960) Stream removed, broadcasting: 5\nI0108 14:21:55.831449    1959 log.go:172] (0xc000742420) Data frame received for 1\nI0108 14:21:55.831489    1959 log.go:172] (0xc000742420) (0xc00040e000) Stream removed, broadcasting: 3\nI0108 14:21:55.831559    1959 log.go:172] (0xc0004008c0) (1) Data frame handling\nI0108 14:21:55.831600    1959 log.go:172] (0xc0004008c0) (1) Data frame sent\nI0108 14:21:55.831619    1959 log.go:172] (0xc000742420) (0xc0004008c0) Stream removed, broadcasting: 1\nI0108 14:21:55.832939    1959 log.go:172] (0xc000742420) (0xc0004008c0) Stream removed, broadcasting: 1\nI0108 14:21:55.832965    1959 log.go:172] (0xc000742420) (0xc00040e000) Stream removed, broadcasting: 3\nI0108 14:21:55.832978    1959 log.go:172] (0xc000742420) (0xc000400960) Stream removed, broadcasting: 5\n"
Jan  8 14:21:55.850: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:21:55.851: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 14:21:55.862: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  8 14:22:05.877: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 14:22:05.877: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:22:05.958: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  8 14:22:05.959: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:05.959: INFO: 
Jan  8 14:22:05.959: INFO: StatefulSet ss has not reached scale 3, at 1
Jan  8 14:22:07.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.937378902s
Jan  8 14:22:08.923: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.437767242s
Jan  8 14:22:09.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.9728308s
Jan  8 14:22:10.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960321326s
Jan  8 14:22:12.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.937953737s
Jan  8 14:22:14.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.988958877s
Jan  8 14:22:15.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 809.113251ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3843
Jan  8 14:22:16.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:22:16.962: INFO: stderr: "I0108 14:22:16.489561    1977 log.go:172] (0xc0002424d0) (0xc00058e6e0) Create stream\nI0108 14:22:16.490013    1977 log.go:172] (0xc0002424d0) (0xc00058e6e0) Stream added, broadcasting: 1\nI0108 14:22:16.503970    1977 log.go:172] (0xc0002424d0) Reply frame received for 1\nI0108 14:22:16.504131    1977 log.go:172] (0xc0002424d0) (0xc00058e000) Create stream\nI0108 14:22:16.504154    1977 log.go:172] (0xc0002424d0) (0xc00058e000) Stream added, broadcasting: 3\nI0108 14:22:16.509392    1977 log.go:172] (0xc0002424d0) Reply frame received for 3\nI0108 14:22:16.509603    1977 log.go:172] (0xc0002424d0) (0xc00058e0a0) Create stream\nI0108 14:22:16.509621    1977 log.go:172] (0xc0002424d0) (0xc00058e0a0) Stream added, broadcasting: 5\nI0108 14:22:16.511677    1977 log.go:172] (0xc0002424d0) Reply frame received for 5\nI0108 14:22:16.786867    1977 log.go:172] (0xc0002424d0) Data frame received for 5\nI0108 14:22:16.787068    1977 log.go:172] (0xc00058e0a0) (5) Data frame handling\nI0108 14:22:16.787102    1977 log.go:172] (0xc00058e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0108 14:22:16.787173    1977 log.go:172] (0xc0002424d0) Data frame received for 3\nI0108 14:22:16.787259    1977 log.go:172] (0xc00058e000) (3) Data frame handling\nI0108 14:22:16.787291    1977 log.go:172] (0xc00058e000) (3) Data frame sent\nI0108 14:22:16.949611    1977 log.go:172] (0xc0002424d0) Data frame received for 1\nI0108 14:22:16.949926    1977 log.go:172] (0xc0002424d0) (0xc00058e0a0) Stream removed, broadcasting: 5\nI0108 14:22:16.950066    1977 log.go:172] (0xc00058e6e0) (1) Data frame handling\nI0108 14:22:16.950092    1977 log.go:172] (0xc0002424d0) (0xc00058e000) Stream removed, broadcasting: 3\nI0108 14:22:16.950117    1977 log.go:172] (0xc00058e6e0) (1) Data frame sent\nI0108 14:22:16.950148    1977 log.go:172] (0xc0002424d0) (0xc00058e6e0) Stream removed, broadcasting: 1\nI0108 14:22:16.950186    1977 log.go:172] (0xc0002424d0) Go away received\nI0108 14:22:16.952125    1977 log.go:172] (0xc0002424d0) (0xc00058e6e0) Stream removed, broadcasting: 1\nI0108 14:22:16.952138    1977 log.go:172] (0xc0002424d0) (0xc00058e000) Stream removed, broadcasting: 3\nI0108 14:22:16.952148    1977 log.go:172] (0xc0002424d0) (0xc00058e0a0) Stream removed, broadcasting: 5\n"
Jan  8 14:22:16.962: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 14:22:16.962: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 14:22:16.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:22:17.303: INFO: stderr: "I0108 14:22:17.129045    1998 log.go:172] (0xc000a6a420) (0xc0008e2780) Create stream\nI0108 14:22:17.129307    1998 log.go:172] (0xc000a6a420) (0xc0008e2780) Stream added, broadcasting: 1\nI0108 14:22:17.134488    1998 log.go:172] (0xc000a6a420) Reply frame received for 1\nI0108 14:22:17.134594    1998 log.go:172] (0xc000a6a420) (0xc000211b80) Create stream\nI0108 14:22:17.134608    1998 log.go:172] (0xc000a6a420) (0xc000211b80) Stream added, broadcasting: 3\nI0108 14:22:17.137711    1998 log.go:172] (0xc000a6a420) Reply frame received for 3\nI0108 14:22:17.137770    1998 log.go:172] (0xc000a6a420) (0xc0008e2820) Create stream\nI0108 14:22:17.137787    1998 log.go:172] (0xc000a6a420) (0xc0008e2820) Stream added, broadcasting: 5\nI0108 14:22:17.140188    1998 log.go:172] (0xc000a6a420) Reply frame received for 5\nI0108 14:22:17.226244    1998 log.go:172] (0xc000a6a420) Data frame received for 5\nI0108 14:22:17.226433    1998 log.go:172] (0xc0008e2820) (5) Data frame handling\nI0108 14:22:17.226496    1998 log.go:172] (0xc0008e2820) (5) Data frame sent\nI0108 14:22:17.226587    1998 log.go:172] (0xc000a6a420) Data frame received for 5\nI0108 14:22:17.226616    1998 log.go:172] (0xc0008e2820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0108 14:22:17.226647    1998 log.go:172] (0xc0008e2820) (5) Data frame sent\nI0108 14:22:17.227597    1998 log.go:172] (0xc000a6a420) Data frame received for 5\nI0108 14:22:17.227613    1998 log.go:172] (0xc0008e2820) (5) Data frame handling\nI0108 14:22:17.227627    1998 log.go:172] (0xc0008e2820) (5) Data frame sent\n+ true\nI0108 14:22:17.228082    1998 log.go:172] (0xc000a6a420) Data frame received for 3\nI0108 14:22:17.228092    1998 log.go:172] (0xc000211b80) (3) Data frame handling\nI0108 14:22:17.228105    1998 log.go:172] (0xc000211b80) (3) Data frame sent\nI0108 14:22:17.298187    1998 log.go:172] (0xc000a6a420) Data frame received for 1\nI0108 14:22:17.298250    1998 log.go:172] (0xc000a6a420) (0xc000211b80) Stream removed, broadcasting: 3\nI0108 14:22:17.298330    1998 log.go:172] (0xc000a6a420) (0xc0008e2820) Stream removed, broadcasting: 5\nI0108 14:22:17.298446    1998 log.go:172] (0xc0008e2780) (1) Data frame handling\nI0108 14:22:17.298461    1998 log.go:172] (0xc0008e2780) (1) Data frame sent\nI0108 14:22:17.298466    1998 log.go:172] (0xc000a6a420) (0xc0008e2780) Stream removed, broadcasting: 1\nI0108 14:22:17.298474    1998 log.go:172] (0xc000a6a420) Go away received\nI0108 14:22:17.299330    1998 log.go:172] (0xc000a6a420) (0xc0008e2780) Stream removed, broadcasting: 1\nI0108 14:22:17.299340    1998 log.go:172] (0xc000a6a420) (0xc000211b80) Stream removed, broadcasting: 3\nI0108 14:22:17.299343    1998 log.go:172] (0xc000a6a420) (0xc0008e2820) Stream removed, broadcasting: 5\n"
Jan  8 14:22:17.303: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 14:22:17.303: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 14:22:17.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:22:17.847: INFO: stderr: "I0108 14:22:17.531468    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c820) Create stream\nI0108 14:22:17.531739    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c820) Stream added, broadcasting: 1\nI0108 14:22:17.548627    2019 log.go:172] (0xc0008d4dc0) Reply frame received for 1\nI0108 14:22:17.548735    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c000) Create stream\nI0108 14:22:17.548750    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c000) Stream added, broadcasting: 3\nI0108 14:22:17.550667    2019 log.go:172] (0xc0008d4dc0) Reply frame received for 3\nI0108 14:22:17.550715    2019 log.go:172] (0xc0008d4dc0) (0xc00099e000) Create stream\nI0108 14:22:17.550731    2019 log.go:172] (0xc0008d4dc0) (0xc00099e000) Stream added, broadcasting: 5\nI0108 14:22:17.552513    2019 log.go:172] (0xc0008d4dc0) Reply frame received for 5\nI0108 14:22:17.705603    2019 log.go:172] (0xc0008d4dc0) Data frame received for 5\nI0108 14:22:17.705762    2019 log.go:172] (0xc00099e000) (5) Data frame handling\nI0108 14:22:17.705788    2019 log.go:172] (0xc00099e000) (5) Data frame sent\nI0108 14:22:17.705815    2019 log.go:172] (0xc0008d4dc0) Data frame received for 5\nI0108 14:22:17.705822    2019 log.go:172] (0xc00099e000) (5) Data frame handling\nI0108 14:22:17.705838    2019 log.go:172] (0xc0008d4dc0) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0108 14:22:17.705852    2019 log.go:172] (0xc000a9c000) (3) Data frame handling\nI0108 14:22:17.706007    2019 log.go:172] (0xc000a9c000) (3) Data frame sent\nI0108 14:22:17.706158    2019 log.go:172] (0xc00099e000) (5) Data frame sent\nI0108 14:22:17.706178    2019 log.go:172] (0xc0008d4dc0) Data frame received for 5\nI0108 14:22:17.706189    2019 log.go:172] (0xc00099e000) (5) Data frame handling\nI0108 14:22:17.706199    2019 log.go:172] (0xc00099e000) (5) Data frame sent\n+ true\nI0108 14:22:17.832382    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c000) Stream removed, broadcasting: 3\nI0108 14:22:17.832537    2019 log.go:172] (0xc0008d4dc0) Data frame received for 1\nI0108 14:22:17.832577    2019 log.go:172] (0xc0008d4dc0) (0xc00099e000) Stream removed, broadcasting: 5\nI0108 14:22:17.832613    2019 log.go:172] (0xc000a9c820) (1) Data frame handling\nI0108 14:22:17.832651    2019 log.go:172] (0xc000a9c820) (1) Data frame sent\nI0108 14:22:17.832673    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c820) Stream removed, broadcasting: 1\nI0108 14:22:17.832701    2019 log.go:172] (0xc0008d4dc0) Go away received\nI0108 14:22:17.834702    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c820) Stream removed, broadcasting: 1\nI0108 14:22:17.834734    2019 log.go:172] (0xc0008d4dc0) (0xc000a9c000) Stream removed, broadcasting: 3\nI0108 14:22:17.834758    2019 log.go:172] (0xc0008d4dc0) (0xc00099e000) Stream removed, broadcasting: 5\n"
Jan  8 14:22:17.848: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  8 14:22:17.848: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  8 14:22:17.857: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:22:17.857: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  8 14:22:17.857: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Jan  8 14:22:17.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:22:18.401: INFO: stderr: "I0108 14:22:18.113089    2039 log.go:172] (0xc00011a6e0) (0xc000952640) Create stream\nI0108 14:22:18.113371    2039 log.go:172] (0xc00011a6e0) (0xc000952640) Stream added, broadcasting: 1\nI0108 14:22:18.126150    2039 log.go:172] (0xc00011a6e0) Reply frame received for 1\nI0108 14:22:18.126285    2039 log.go:172] (0xc00011a6e0) (0xc00089e000) Create stream\nI0108 14:22:18.126311    2039 log.go:172] (0xc00011a6e0) (0xc00089e000) Stream added, broadcasting: 3\nI0108 14:22:18.128029    2039 log.go:172] (0xc00011a6e0) Reply frame received for 3\nI0108 14:22:18.128108    2039 log.go:172] (0xc00011a6e0) (0xc00067c0a0) Create stream\nI0108 14:22:18.128136    2039 log.go:172] (0xc00011a6e0) (0xc00067c0a0) Stream added, broadcasting: 5\nI0108 14:22:18.129908    2039 log.go:172] (0xc00011a6e0) Reply frame received for 5\nI0108 14:22:18.244584    2039 log.go:172] (0xc00011a6e0) Data frame received for 5\nI0108 14:22:18.244782    2039 log.go:172] (0xc00067c0a0) (5) Data frame handling\nI0108 14:22:18.244815    2039 log.go:172] (0xc00067c0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:22:18.244897    2039 log.go:172] (0xc00011a6e0) Data frame received for 3\nI0108 14:22:18.244954    2039 log.go:172] (0xc00089e000) (3) Data frame handling\nI0108 14:22:18.244971    2039 log.go:172] (0xc00089e000) (3) Data frame sent\nI0108 14:22:18.388799    2039 log.go:172] (0xc00011a6e0) Data frame received for 1\nI0108 14:22:18.389058    2039 log.go:172] (0xc00011a6e0) (0xc00089e000) Stream removed, broadcasting: 3\nI0108 14:22:18.389324    2039 log.go:172] (0xc00011a6e0) (0xc00067c0a0) Stream removed, broadcasting: 5\nI0108 14:22:18.389577    2039 log.go:172] (0xc000952640) (1) Data frame handling\nI0108 14:22:18.389638    2039 log.go:172] (0xc000952640) (1) Data frame sent\nI0108 14:22:18.389652    2039 log.go:172] (0xc00011a6e0) (0xc000952640) Stream removed, broadcasting: 1\nI0108 14:22:18.389663    2039 log.go:172] (0xc00011a6e0) Go away received\nI0108 14:22:18.391810    2039 log.go:172] (0xc00011a6e0) (0xc000952640) Stream removed, broadcasting: 1\nI0108 14:22:18.391836    2039 log.go:172] (0xc00011a6e0) (0xc00089e000) Stream removed, broadcasting: 3\nI0108 14:22:18.391851    2039 log.go:172] (0xc00011a6e0) (0xc00067c0a0) Stream removed, broadcasting: 5\n"
Jan  8 14:22:18.401: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:22:18.401: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 14:22:18.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:22:18.955: INFO: stderr: "I0108 14:22:18.734249    2058 log.go:172] (0xc0008960b0) (0xc0007986e0) Create stream\nI0108 14:22:18.734828    2058 log.go:172] (0xc0008960b0) (0xc0007986e0) Stream added, broadcasting: 1\nI0108 14:22:18.739290    2058 log.go:172] (0xc0008960b0) Reply frame received for 1\nI0108 14:22:18.739392    2058 log.go:172] (0xc0008960b0) (0xc00033e320) Create stream\nI0108 14:22:18.739415    2058 log.go:172] (0xc0008960b0) (0xc00033e320) Stream added, broadcasting: 3\nI0108 14:22:18.741574    2058 log.go:172] (0xc0008960b0) Reply frame received for 3\nI0108 14:22:18.741607    2058 log.go:172] (0xc0008960b0) (0xc0006d6000) Create stream\nI0108 14:22:18.741615    2058 log.go:172] (0xc0008960b0) (0xc0006d6000) Stream added, broadcasting: 5\nI0108 14:22:18.743379    2058 log.go:172] (0xc0008960b0) Reply frame received for 5\nI0108 14:22:18.827977    2058 log.go:172] (0xc0008960b0) Data frame received for 5\nI0108 14:22:18.828064    2058 log.go:172] (0xc0006d6000) (5) Data frame handling\nI0108 14:22:18.828114    2058 log.go:172] (0xc0006d6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:22:18.854121    2058 log.go:172] (0xc0008960b0) Data frame received for 3\nI0108 14:22:18.854229    2058 log.go:172] (0xc00033e320) (3) Data frame handling\nI0108 14:22:18.854268    2058 log.go:172] (0xc00033e320) (3) Data frame sent\nI0108 14:22:18.946190    2058 log.go:172] (0xc0008960b0) Data frame received for 1\nI0108 14:22:18.946509    2058 log.go:172] (0xc0008960b0) (0xc00033e320) Stream removed, broadcasting: 3\nI0108 14:22:18.946671    2058 log.go:172] (0xc0007986e0) (1) Data frame handling\nI0108 14:22:18.946716    2058 log.go:172] (0xc0007986e0) (1) Data frame sent\nI0108 14:22:18.946819    2058 log.go:172] (0xc0008960b0) (0xc0006d6000) Stream removed, broadcasting: 5\nI0108 14:22:18.946932    2058 log.go:172] (0xc0008960b0) (0xc0007986e0) Stream removed, broadcasting: 1\nI0108 14:22:18.946985    2058 log.go:172] (0xc0008960b0) Go away received\nI0108 14:22:18.948187    2058 log.go:172] (0xc0008960b0) (0xc0007986e0) Stream removed, broadcasting: 1\nI0108 14:22:18.948208    2058 log.go:172] (0xc0008960b0) (0xc00033e320) Stream removed, broadcasting: 3\nI0108 14:22:18.948215    2058 log.go:172] (0xc0008960b0) (0xc0006d6000) Stream removed, broadcasting: 5\n"
Jan  8 14:22:18.955: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:22:18.955: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  8 14:22:18.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  8 14:22:19.504: INFO: stderr: "I0108 14:22:19.168602    2077 log.go:172] (0xc00091e2c0) (0xc0009d46e0) Create stream\nI0108 14:22:19.168973    2077 log.go:172] (0xc00091e2c0) (0xc0009d46e0) Stream added, broadcasting: 1\nI0108 14:22:19.183249    2077 log.go:172] (0xc00091e2c0) Reply frame received for 1\nI0108 14:22:19.183399    2077 log.go:172] (0xc00091e2c0) (0xc00063a1e0) Create stream\nI0108 14:22:19.183417    2077 log.go:172] (0xc00091e2c0) (0xc00063a1e0) Stream added, broadcasting: 3\nI0108 14:22:19.186834    2077 log.go:172] (0xc00091e2c0) Reply frame received for 3\nI0108 14:22:19.186927    2077 log.go:172] (0xc00091e2c0) (0xc0009d4780) Create stream\nI0108 14:22:19.186952    2077 log.go:172] (0xc00091e2c0) (0xc0009d4780) Stream added, broadcasting: 5\nI0108 14:22:19.188299    2077 log.go:172] (0xc00091e2c0) Reply frame received for 5\nI0108 14:22:19.281951    2077 log.go:172] (0xc00091e2c0) Data frame received for 5\nI0108 14:22:19.282025    2077 log.go:172] (0xc0009d4780) (5) Data frame handling\nI0108 14:22:19.282045    2077 log.go:172] (0xc0009d4780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0108 14:22:19.304468    2077 log.go:172] (0xc00091e2c0) Data frame received for 3\nI0108 14:22:19.304522    2077 log.go:172] (0xc00063a1e0) (3) Data frame handling\nI0108 14:22:19.304545    2077 log.go:172] (0xc00063a1e0) (3) Data frame sent\nI0108 14:22:19.486647    2077 log.go:172] (0xc00091e2c0) (0xc00063a1e0) Stream removed, broadcasting: 3\nI0108 14:22:19.486880    2077 log.go:172] (0xc00091e2c0) Data frame received for 1\nI0108 14:22:19.486950    2077 log.go:172] (0xc0009d46e0) (1) Data frame handling\nI0108 14:22:19.487046    2077 log.go:172] (0xc0009d46e0) (1) Data frame sent\nI0108 14:22:19.487135    2077 log.go:172] (0xc00091e2c0) (0xc0009d4780) Stream removed, broadcasting: 5\nI0108 14:22:19.487818    2077 log.go:172] (0xc00091e2c0) (0xc0009d46e0) Stream removed, broadcasting: 1\nI0108 14:22:19.488133    2077 log.go:172] (0xc00091e2c0) Go away received\nI0108 14:22:19.490055    2077 log.go:172] (0xc00091e2c0) (0xc0009d46e0) Stream removed, broadcasting: 1\nI0108 14:22:19.490108    2077 log.go:172] (0xc00091e2c0) (0xc00063a1e0) Stream removed, broadcasting: 3\nI0108 14:22:19.490161    2077 log.go:172] (0xc00091e2c0) (0xc0009d4780) Stream removed, broadcasting: 5\n"
Jan  8 14:22:19.504: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  8 14:22:19.504: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

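The mv commands above are the test's readiness switch: each replica's readiness probe depends on nginx serving index.html, so moving the file out of the web root makes a pod NotReady without killing it, and moving it back restores Ready. Restated as two plain commands against the first replica, with the same namespace and paths as the run:

# Break readiness on ss-0; the probe path disappears but the container keeps running.
kubectl -n statefulset-3843 exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Restore readiness on ss-0.
kubectl -n statefulset-3843 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'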
Jan  8 14:22:19.504: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:22:19.512: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan  8 14:22:29.526: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 14:22:29.527: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 14:22:29.527: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  8 14:22:29.589: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:29.589: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:29.590: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:29.590: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:29.590: INFO: 
Jan  8 14:22:29.590: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:31.109: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:31.109: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:31.109: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:31.109: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:31.109: INFO: 
Jan  8 14:22:31.109: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:32.119: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:32.119: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:32.119: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:32.119: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:32.119: INFO: 
Jan  8 14:22:32.119: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:33.557: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:33.558: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:33.558: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:33.558: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:33.558: INFO: 
Jan  8 14:22:33.558: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:34.581: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:34.581: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:34.581: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:34.581: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:34.581: INFO: 
Jan  8 14:22:34.581: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:35.594: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan  8 14:22:35.594: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:35.594: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:35.594: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:35.594: INFO: 
Jan  8 14:22:35.594: INFO: StatefulSet ss has not reached scale 0, at 3
Jan  8 14:22:36.609: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  8 14:22:36.609: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:21:45 +0000 UTC  }]
Jan  8 14:22:36.610: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:36.610: INFO: 
Jan  8 14:22:36.610: INFO: StatefulSet ss has not reached scale 0, at 2
Jan  8 14:22:37.617: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  8 14:22:37.617: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:37.617: INFO: 
Jan  8 14:22:37.617: INFO: StatefulSet ss has not reached scale 0, at 1
Jan  8 14:22:38.628: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan  8 14:22:38.628: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:22:05 +0000 UTC  }]
Jan  8 14:22:38.628: INFO: 
Jan  8 14:22:38.628: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-3843
Jan  8 14:22:39.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:22:39.908: INFO: rc: 1
Jan  8 14:22:39.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0032cc660 exit status 1   true [0xc002248a98 0xc002248ab0 0xc002248b18] [0xc002248a98 0xc002248ab0 0xc002248b18] [0xc002248aa8 0xc002248b00] [0xba6c50 0xba6c50] 0xc00215f200 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
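
For context, the command being retried above appears to be the step that restores the pod's readiness: it moves index.html back into nginx's web root inside the pod. Reproduced verbatim from the log:

  kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- \
    /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

The trailing '|| true' masks mv's exit code inside the container, so the rc: 1 seen here comes from kubectl itself: it cannot attach once the nginx container (and, in the later retries, the whole pod) has been removed by the scale-down.
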
Jan  8 14:22:49.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:22:50.097: INFO: rc: 1
Jan  8 14:22:50.097: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002ab1da0 exit status 1   true [0xc00265c8a8 0xc00265c8c0 0xc00265c8d8] [0xc00265c8a8 0xc00265c8c0 0xc00265c8d8] [0xc00265c8b8 0xc00265c8d0] [0xba6c50 0xba6c50] 0xc002941e60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  8 14:23:00 - 14:27:37: INFO: [retry loop condensed: the identical RunHostCmd was retried every 10s, 28 more attempts in all, each returning rc: 1 with stderr: Error from server (NotFound): pods "ss-2" not found]
Jan  8 14:27:47.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3843 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  8 14:27:47.251: INFO: rc: 1
Jan  8 14:27:47.251: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  8 14:27:47.251: INFO: Scaling statefulset ss to 0
Jan  8 14:27:47.266: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  8 14:27:47.270: INFO: Deleting all statefulset in ns statefulset-3843
Jan  8 14:27:47.273: INFO: Scaling statefulset ss to 0
Jan  8 14:27:47.284: INFO: Waiting for statefulset status.replicas updated to 0
Jan  8 14:27:47.287: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:27:47.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3843" for this suite.
Jan  8 14:27:55.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:27:55.455: INFO: namespace statefulset-3843 deletion completed in 8.147252612s

• [SLOW TEST:370.475 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
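
The scale-down driven above can be approximated by hand; a rough sketch using the names from this log (burst scaling corresponds to a StatefulSet with podManagementPolicy: Parallel):

  kubectl scale statefulset ss --replicas=0 -n statefulset-3843
  kubectl get pods -n statefulset-3843 -w    # watch ss-0, ss-1, ss-2 terminate together

With Parallel pod management the controller deletes all replicas at once instead of one ordinal at a time, which is why the poll above sees the count drop 3 -> 2 -> 1 -> 0 even while every pod is unready.
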
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:27:55.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1101f320-c22f-4ef1-9c9f-8336b5dff66b
STEP: Creating a pod to test consume configMaps
Jan  8 14:27:55.594: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4" in namespace "configmap-9436" to be "success or failure"
Jan  8 14:27:55.620: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.492385ms
Jan  8 14:27:57.629: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035309035s
Jan  8 14:27:59.637: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042970553s
Jan  8 14:28:01.644: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050006212s
Jan  8 14:28:03.651: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056929896s
STEP: Saw pod success
Jan  8 14:28:03.651: INFO: Pod "pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4" satisfied condition "success or failure"
Jan  8 14:28:03.655: INFO: Trying to get logs from node iruya-node pod pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4 container configmap-volume-test: 
STEP: delete the pod
Jan  8 14:28:03.711: INFO: Waiting for pod pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4 to disappear
Jan  8 14:28:03.716: INFO: Pod pod-configmaps-eb553a36-099f-4e3b-b421-766c31037da4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:28:03.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9436" for this suite.
Jan  8 14:28:09.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:28:10.001: INFO: namespace configmap-9436 deletion completed in 6.20780441s

• [SLOW TEST:14.545 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
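
The defaultMode case above sets the file mode applied to every key projected from the ConfigMap. A minimal hand-rolled sketch, with illustrative names rather than the suite's generated ones:

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: demo-cm
        defaultMode: 0400
  EOF
  kubectl logs cm-mode-demo    # once Succeeded: expect 400, then value-1
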
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:28:10.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-8477ae91-dee8-4b41-8b57-1ddbcd273002
STEP: Creating secret with name s-test-opt-upd-018a2bba-3100-420b-85df-9080f2cee0f6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8477ae91-dee8-4b41-8b57-1ddbcd273002
STEP: Updating secret s-test-opt-upd-018a2bba-3100-420b-85df-9080f2cee0f6
STEP: Creating secret with name s-test-opt-create-a1ba9aa5-7305-4d59-9d65-d9b63c2120f0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:29:26.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2276" for this suite.
Jan  8 14:29:48.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:29:48.267: INFO: namespace projected-2276 deletion completed in 22.155933491s

• [SLOW TEST:98.265 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
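
The update propagation exercised above relies on the kubelet re-syncing projected volume contents after the backing API objects change. A rough command-line sketch, with illustrative names (the suite's secret names are generated):

  kubectl create secret generic s-demo --from-literal=data-1=value-1
  # ...run a pod that mounts s-demo through a projected volume, with optional: true...
  kubectl create secret generic s-demo --from-literal=data-1=value-2 \
    -o yaml --dry-run | kubectl apply -f -
  kubectl delete secret s-demo    # optional sources may also disappear entirely

The kubelet refreshes the mounted files on its periodic sync, which is the update the test above spends roughly 76 seconds waiting to observe.
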
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:29:48.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:29:54.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9747" for this suite.
Jan  8 14:30:00.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:30:00.835: INFO: namespace namespaces-9747 deletion completed in 6.158956976s
STEP: Destroying namespace "nsdeletetest-9926" for this suite.
Jan  8 14:30:00.837: INFO: Namespace nsdeletetest-9926 was already deleted
STEP: Destroying namespace "nsdeletetest-5057" for this suite.
Jan  8 14:30:06.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:30:07.012: INFO: namespace nsdeletetest-5057 deletion completed in 6.174305113s

• [SLOW TEST:18.744 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
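
A hand-run approximation of the namespace lifecycle checked above, with illustrative names:

  kubectl create namespace ns-demo
  kubectl -n ns-demo create service clusterip test-service --tcp=80:80
  kubectl delete namespace ns-demo
  # deletion is asynchronous: the namespace stays Terminating until every
  # object inside it, the service included, has been removed
  kubectl create namespace ns-demo
  kubectl -n ns-demo get services    # expect: No resources found

This is the guarantee the test asserts: recreating a deleted namespace never resurrects its old contents.
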
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:30:07.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-925d19a4-afe6-435b-8a74-01f6159d8848
STEP: Creating a pod to test consume secrets
Jan  8 14:30:07.114: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f" in namespace "projected-2731" to be "success or failure"
Jan  8 14:30:07.200: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f": Phase="Pending", Reason="", readiness=false. Elapsed: 86.181365ms
Jan  8 14:30:09.211: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097589469s
Jan  8 14:30:11.226: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11197166s
Jan  8 14:30:13.236: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122431832s
Jan  8 14:30:15.250: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.136666397s
STEP: Saw pod success
Jan  8 14:30:15.251: INFO: Pod "pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f" satisfied condition "success or failure"
Jan  8 14:30:15.257: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f container projected-secret-volume-test: 
STEP: delete the pod
Jan  8 14:30:15.329: INFO: Waiting for pod pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f to disappear
Jan  8 14:30:15.345: INFO: Pod pod-projected-secrets-b0137077-d579-4e58-a9ea-a401df82b80f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:30:15.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2731" for this suite.
Jan  8 14:30:21.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:30:21.565: INFO: namespace projected-2731 deletion completed in 6.211681242s

• [SLOW TEST:14.553 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
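
The "mappings and Item Mode" variant above projects one secret key to a chosen path with a per-item file mode. A plausible spec fragment, with illustrative key and path names:

  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400

The pod then reads the file back from the mount and verifies both its contents and the 0400 mode, which is what the "Saw pod success" above reports.
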
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:30:21.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  8 14:30:21.741: INFO: Waiting up to 5m0s for pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662" in namespace "containers-7462" to be "success or failure"
Jan  8 14:30:21.746: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662": Phase="Pending", Reason="", readiness=false. Elapsed: 4.960559ms
Jan  8 14:30:23.757: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015659665s
Jan  8 14:30:25.762: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020829247s
Jan  8 14:30:27.772: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030772546s
Jan  8 14:30:29.914: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.172140102s
STEP: Saw pod success
Jan  8 14:30:29.914: INFO: Pod "client-containers-8d020109-c36d-4b45-8a41-148d0dd37662" satisfied condition "success or failure"
Jan  8 14:30:29.920: INFO: Trying to get logs from node iruya-node pod client-containers-8d020109-c36d-4b45-8a41-148d0dd37662 container test-container: 
STEP: delete the pod
Jan  8 14:30:29.987: INFO: Waiting for pod client-containers-8d020109-c36d-4b45-8a41-148d0dd37662 to disappear
Jan  8 14:30:30.104: INFO: Pod client-containers-8d020109-c36d-4b45-8a41-148d0dd37662 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:30:30.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7462" for this suite.
Jan  8 14:30:36.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:30:36.237: INFO: namespace containers-7462 deletion completed in 6.124282815s

• [SLOW TEST:14.671 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
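
Overriding an image's default arguments, as exercised above, only needs 'args' in the container spec; 'command' would override the ENTRYPOINT instead. A minimal sketch with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: args-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      args: ["echo", "overridden arguments"]    # replaces the image's CMD
  EOF
  kubectl logs args-demo    # once Succeeded: overridden arguments
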
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:30:36.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1cc5c54c-3062-4668-9dcc-f2c9a74c765f
STEP: Creating a pod to test consume secrets
Jan  8 14:30:36.394: INFO: Waiting up to 5m0s for pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533" in namespace "secrets-3328" to be "success or failure"
Jan  8 14:30:36.401: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837102ms
Jan  8 14:30:38.414: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01985039s
Jan  8 14:30:40.424: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03020583s
Jan  8 14:30:42.434: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040188398s
Jan  8 14:30:44.447: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052956069s
STEP: Saw pod success
Jan  8 14:30:44.447: INFO: Pod "pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533" satisfied condition "success or failure"
Jan  8 14:30:44.460: INFO: Trying to get logs from node iruya-node pod pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533 container secret-volume-test: 
STEP: delete the pod
Jan  8 14:30:44.564: INFO: Waiting for pod pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533 to disappear
Jan  8 14:30:44.632: INFO: Pod pod-secrets-59fed0d8-5c29-4a7b-a039-0ce928761533 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:30:44.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3328" for this suite.
Jan  8 14:30:50.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:30:50.752: INFO: namespace secrets-3328 deletion completed in 6.111353022s

• [SLOW TEST:14.515 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
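
The multiple-volumes case above mounts one secret at two different paths in the same pod; a single secret object can back any number of volume mounts. Spec fragment with illustrative names:

  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret

Each volume gets its own volumeMount, and the test container checks that identical content shows up at both mount paths.
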
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:30:50.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 14:30:50.841: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1" in namespace "projected-8153" to be "success or failure"
Jan  8 14:30:50.930: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 88.603283ms
Jan  8 14:30:52.941: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099117641s
Jan  8 14:30:54.951: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109249359s
Jan  8 14:30:57.043: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201915584s
Jan  8 14:30:59.053: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.211934118s
STEP: Saw pod success
Jan  8 14:30:59.053: INFO: Pod "downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1" satisfied condition "success or failure"
Jan  8 14:30:59.057: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1 container client-container: 
STEP: delete the pod
Jan  8 14:30:59.188: INFO: Waiting for pod downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1 to disappear
Jan  8 14:30:59.197: INFO: Pod downwardapi-volume-5d7c6d9e-a371-4960-95c1-70e6bda7cbb1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:30:59.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8153" for this suite.
Jan  8 14:31:05.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:31:05.361: INFO: namespace projected-8153 deletion completed in 6.1469338s

• [SLOW TEST:14.609 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
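
The downward API case above exposes the container's CPU limit as a file; when the container declares no limit, the kubelet falls back to the node's allocatable CPU, which is the default the test verifies. A volume fragment along those lines, with the container name taken from the log and the path illustrative:

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
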
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:31:05.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-279077e9-6d15-491d-a77e-c5da67b393e8
STEP: Creating a pod to test consume configMaps
Jan  8 14:31:05.466: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff" in namespace "configmap-1046" to be "success or failure"
Jan  8 14:31:05.487: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff": Phase="Pending", Reason="", readiness=false. Elapsed: 20.662773ms
Jan  8 14:31:07.497: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030432951s
Jan  8 14:31:09.503: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036980466s
Jan  8 14:31:11.511: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044593197s
Jan  8 14:31:13.519: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052990846s
STEP: Saw pod success
Jan  8 14:31:13.519: INFO: Pod "pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff" satisfied condition "success or failure"
Jan  8 14:31:13.524: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff container configmap-volume-test: 
STEP: delete the pod
Jan  8 14:31:13.601: INFO: Waiting for pod pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff to disappear
Jan  8 14:31:13.720: INFO: Pod pod-configmaps-d6bdd92a-1e47-4a6f-83f4-9cbe95042aff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:31:13.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1046" for this suite.
Jan  8 14:31:19.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:31:19.869: INFO: namespace configmap-1046 deletion completed in 6.139661084s

• [SLOW TEST:14.508 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
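
The mappings variant above projects a ConfigMap key to a chosen filename instead of reusing the key as the path. Fragment with illustrative names:

  volumes:
  - name: configmap-volume
    configMap:
      name: demo-cm
      items:
      - key: data-1
        path: path/to/data-1

The file then appears at <mountPath>/path/to/data-1 rather than <mountPath>/data-1.
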
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:31:19.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-88dd5f8f-f1c8-4dd9-9d13-9263d2b1cc19
STEP: Creating a pod to test consume configMaps
Jan  8 14:31:20.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18" in namespace "configmap-604" to be "success or failure"
Jan  8 14:31:20.014: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565969ms
Jan  8 14:31:22.031: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019981201s
Jan  8 14:31:24.046: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035097858s
Jan  8 14:31:26.053: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041384252s
Jan  8 14:31:28.063: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051786069s
Jan  8 14:31:30.072: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060351394s
STEP: Saw pod success
Jan  8 14:31:30.072: INFO: Pod "pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18" satisfied condition "success or failure"
Jan  8 14:31:30.076: INFO: Trying to get logs from node iruya-node pod pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18 container configmap-volume-test: 
STEP: delete the pod
Jan  8 14:31:30.150: INFO: Waiting for pod pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18 to disappear
Jan  8 14:31:30.158: INFO: Pod pod-configmaps-43e07de4-a278-4d20-a52f-79abf1697d18 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:31:30.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-604" for this suite.
Jan  8 14:31:36.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:31:36.385: INFO: namespace configmap-604 deletion completed in 6.221470467s

• [SLOW TEST:16.515 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
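
Both ConfigMap volume specs above follow the same pattern: the suite creates a ConfigMap, runs a test pod whose volume projects a chosen key to a mapped path (in the second spec with an explicit per-item file mode), waits for the pod to reach Succeeded, and checks the file from the container's logs. A minimal client-go sketch of that kind of pod, assuming v1.15-era (context-free) client signatures; the names, image, key, path, and 0400 mode here are illustrative, not values recovered from this run:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Same kubeconfig path the suite uses in this run.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	mode := int32(0400) // the "Item mode set" part of the spec name
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "configmap-volume",
    				VolumeSource: corev1.VolumeSource{
    					ConfigMap: &corev1.ConfigMapVolumeSource{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
    						// Project key "data-1" to a mapped path instead of its own name.
    						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "configmap-volume-test",
    				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative test image
    				Args:         []string{"--file_content=/etc/configmap-volume/path/to/data-2"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
    			}},
    		},
    	}

    	// v1.15-era Create takes the object directly, with no context argument.
    	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
    		panic(err)
    	}
    	fmt.Println("created", pod.Name)
    }
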
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:31:36.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-fabd046a-d51f-45e0-afa1-a3af212b66d3 in namespace container-probe-2181
Jan  8 14:31:44.602: INFO: Started pod busybox-fabd046a-d51f-45e0-afa1-a3af212b66d3 in namespace container-probe-2181
STEP: checking the pod's current state and verifying that restartCount is present
Jan  8 14:31:44.607: INFO: Initial restart count of pod busybox-fabd046a-d51f-45e0-afa1-a3af212b66d3 is 0
Jan  8 14:32:39.368: INFO: Restart count of pod container-probe-2181/busybox-fabd046a-d51f-45e0-afa1-a3af212b66d3 is now 1 (54.760022084s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:32:39.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2181" for this suite.
Jan  8 14:32:47.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:32:47.652: INFO: namespace container-probe-2181 deletion completed in 8.185802241s

• [SLOW TEST:71.266 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
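
The restart logged at 14:32:39 is the probe doing its job: the container creates its health file, removes it after a few seconds, and the exec probe's cat /tmp/health then fails until the kubelet restarts the container. A sketch of such a pod spec; the busybox command and probe timings are assumptions in the spirit of the spec name, and Handler is the v1.15-era field name (later releases call it ProbeHandler):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "busybox",
    				Image: "busybox",
    				// Healthy for ~10s, then the probe target disappears.
    				Command: []string{"/bin/sh", "-c",
    					"touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					Handler: corev1.Handler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 15,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    	fmt.Printf("liveness command: %v\n", pod.Spec.Containers[0].LivenessProbe.Exec.Command)
    }
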
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:32:47.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  8 14:32:47.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-713'
Jan  8 14:32:48.082: INFO: stderr: ""
Jan  8 14:32:48.082: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 14:32:48.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:32:48.296: INFO: stderr: ""
Jan  8 14:32:48.296: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
Jan  8 14:32:48.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:32:48.412: INFO: stderr: ""
Jan  8 14:32:48.412: INFO: stdout: ""
Jan  8 14:32:48.412: INFO: update-demo-nautilus-pb7x2 is created but not running
Jan  8 14:32:53.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:32:54.604: INFO: stderr: ""
Jan  8 14:32:54.604: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
Jan  8 14:32:54.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:32:55.353: INFO: stderr: ""
Jan  8 14:32:55.353: INFO: stdout: ""
Jan  8 14:32:55.353: INFO: update-demo-nautilus-pb7x2 is created but not running
Jan  8 14:33:00.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:00.590: INFO: stderr: ""
Jan  8 14:33:00.590: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
Jan  8 14:33:00.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:00.709: INFO: stderr: ""
Jan  8 14:33:00.709: INFO: stdout: "true"
Jan  8 14:33:00.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:00.839: INFO: stderr: ""
Jan  8 14:33:00.839: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:33:00.839: INFO: validating pod update-demo-nautilus-pb7x2
Jan  8 14:33:00.869: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:33:00.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:33:00.869: INFO: update-demo-nautilus-pb7x2 is verified up and running
Jan  8 14:33:00.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xthk7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:00.993: INFO: stderr: ""
Jan  8 14:33:00.993: INFO: stdout: "true"
Jan  8 14:33:00.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xthk7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:01.142: INFO: stderr: ""
Jan  8 14:33:01.142: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:33:01.142: INFO: validating pod update-demo-nautilus-xthk7
Jan  8 14:33:01.154: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:33:01.154: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:33:01.154: INFO: update-demo-nautilus-xthk7 is verified up and running
STEP: scaling down the replication controller
Jan  8 14:33:01.156: INFO: scanned /root for discovery docs: 
Jan  8 14:33:01.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-713'
Jan  8 14:33:02.357: INFO: stderr: ""
Jan  8 14:33:02.357: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 14:33:02.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:02.571: INFO: stderr: ""
Jan  8 14:33:02.571: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  8 14:33:07.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:07.736: INFO: stderr: ""
Jan  8 14:33:07.736: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  8 14:33:12.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:12.965: INFO: stderr: ""
Jan  8 14:33:12.965: INFO: stdout: "update-demo-nautilus-pb7x2 update-demo-nautilus-xthk7 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  8 14:33:17.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:18.104: INFO: stderr: ""
Jan  8 14:33:18.104: INFO: stdout: "update-demo-nautilus-pb7x2 "
Jan  8 14:33:18.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:18.250: INFO: stderr: ""
Jan  8 14:33:18.250: INFO: stdout: "true"
Jan  8 14:33:18.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:18.349: INFO: stderr: ""
Jan  8 14:33:18.349: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:33:18.349: INFO: validating pod update-demo-nautilus-pb7x2
Jan  8 14:33:18.356: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:33:18.356: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:33:18.356: INFO: update-demo-nautilus-pb7x2 is verified up and running
STEP: scaling up the replication controller
Jan  8 14:33:18.358: INFO: scanned /root for discovery docs: 
Jan  8 14:33:18.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-713'
Jan  8 14:33:20.400: INFO: stderr: ""
Jan  8 14:33:20.400: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 14:33:20.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:20.788: INFO: stderr: ""
Jan  8 14:33:20.788: INFO: stdout: "update-demo-nautilus-b46h6 update-demo-nautilus-pb7x2 "
Jan  8 14:33:20.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b46h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:20.914: INFO: stderr: ""
Jan  8 14:33:20.914: INFO: stdout: ""
Jan  8 14:33:20.914: INFO: update-demo-nautilus-b46h6 is created but not running
Jan  8 14:33:25.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:26.125: INFO: stderr: ""
Jan  8 14:33:26.125: INFO: stdout: "update-demo-nautilus-b46h6 update-demo-nautilus-pb7x2 "
Jan  8 14:33:26.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b46h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:26.250: INFO: stderr: ""
Jan  8 14:33:26.250: INFO: stdout: ""
Jan  8 14:33:26.250: INFO: update-demo-nautilus-b46h6 is created but not running
Jan  8 14:33:31.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-713'
Jan  8 14:33:31.603: INFO: stderr: ""
Jan  8 14:33:31.603: INFO: stdout: "update-demo-nautilus-b46h6 update-demo-nautilus-pb7x2 "
Jan  8 14:33:31.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b46h6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:31.814: INFO: stderr: ""
Jan  8 14:33:31.815: INFO: stdout: "true"
Jan  8 14:33:31.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b46h6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:31.945: INFO: stderr: ""
Jan  8 14:33:31.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:33:31.945: INFO: validating pod update-demo-nautilus-b46h6
Jan  8 14:33:31.949: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:33:31.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:33:31.949: INFO: update-demo-nautilus-b46h6 is verified up and running
Jan  8 14:33:31.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:32.031: INFO: stderr: ""
Jan  8 14:33:32.031: INFO: stdout: "true"
Jan  8 14:33:32.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pb7x2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-713'
Jan  8 14:33:32.176: INFO: stderr: ""
Jan  8 14:33:32.176: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:33:32.176: INFO: validating pod update-demo-nautilus-pb7x2
Jan  8 14:33:32.185: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:33:32.185: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:33:32.185: INFO: update-demo-nautilus-pb7x2 is verified up and running
STEP: using delete to clean up resources
Jan  8 14:33:32.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-713'
Jan  8 14:33:32.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 14:33:32.316: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  8 14:33:32.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-713'
Jan  8 14:33:32.449: INFO: stderr: "No resources found.\n"
Jan  8 14:33:32.449: INFO: stdout: ""
Jan  8 14:33:32.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:33:32.567: INFO: stderr: ""
Jan  8 14:33:32.567: INFO: stdout: "update-demo-nautilus-b46h6\nupdate-demo-nautilus-pb7x2\n"
Jan  8 14:33:33.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-713'
Jan  8 14:33:33.259: INFO: stderr: "No resources found.\n"
Jan  8 14:33:33.259: INFO: stdout: ""
Jan  8 14:33:33.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:33:33.406: INFO: stderr: ""
Jan  8 14:33:33.406: INFO: stdout: "update-demo-nautilus-b46h6\nupdate-demo-nautilus-pb7x2\n"
Jan  8 14:33:33.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-713'
Jan  8 14:33:33.729: INFO: stderr: "No resources found.\n"
Jan  8 14:33:33.729: INFO: stdout: ""
Jan  8 14:33:33.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:33:33.973: INFO: stderr: ""
Jan  8 14:33:33.973: INFO: stdout: "update-demo-nautilus-b46h6\nupdate-demo-nautilus-pb7x2\n"
Jan  8 14:33:34.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-713'
Jan  8 14:33:34.174: INFO: stderr: "No resources found.\n"
Jan  8 14:33:34.174: INFO: stdout: ""
Jan  8 14:33:34.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:33:34.303: INFO: stderr: ""
Jan  8 14:33:34.303: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:33:34.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-713" for this suite.
Jan  8 14:33:58.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:33:58.596: INFO: namespace kubectl-713 deletion completed in 24.22643018s

• [SLOW TEST:70.944 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
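
Everything this spec does goes through plain kubectl, so the scale-down and the pod-name polling loop can be replayed outside the suite. A small Go wrapper around the two commands logged above, reusing the kubeconfig path, namespace, and replication controller name from this run (error handling kept minimal):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func kubectl(args ...string) string {
    	all := append([]string{"--kubeconfig=/root/.kube/config"}, args...)
    	out, err := exec.Command("/usr/local/bin/kubectl", all...).CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("%v: %s", err, out))
    	}
    	return string(out)
    }

    func main() {
    	// Scale the replication controller down to one replica...
    	fmt.Print(kubectl("scale", "rc", "update-demo-nautilus",
    		"--replicas=1", "--timeout=5m", "--namespace=kubectl-713"))
    	// ...then list pod names with the same go-template the suite polls with.
    	fmt.Println(kubectl("get", "pods", "-o", "template",
    		"--template={{range .items}}{{.metadata.name}} {{end}}",
    		"-l", "name=update-demo", "--namespace=kubectl-713"))
    }
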
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:33:58.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lclj
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 14:33:58.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lclj" in namespace "subpath-9651" to be "success or failure"
Jan  8 14:33:58.764: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Pending", Reason="", readiness=false. Elapsed: 24.091997ms
Jan  8 14:34:00.775: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034485468s
Jan  8 14:34:02.781: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041035025s
Jan  8 14:34:04.794: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05348655s
Jan  8 14:34:06.804: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 8.063527242s
Jan  8 14:34:09.335: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 10.594554103s
Jan  8 14:34:11.344: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 12.603657374s
Jan  8 14:34:13.358: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 14.617495581s
Jan  8 14:34:15.366: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 16.625792711s
Jan  8 14:34:17.376: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 18.635678207s
Jan  8 14:34:19.415: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 20.674471122s
Jan  8 14:34:21.424: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 22.68371391s
Jan  8 14:34:23.441: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 24.700967619s
Jan  8 14:34:25.460: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 26.719547557s
Jan  8 14:34:27.468: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Running", Reason="", readiness=true. Elapsed: 28.727995738s
Jan  8 14:34:29.476: INFO: Pod "pod-subpath-test-configmap-lclj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.735209897s
STEP: Saw pod success
Jan  8 14:34:29.476: INFO: Pod "pod-subpath-test-configmap-lclj" satisfied condition "success or failure"
Jan  8 14:34:29.481: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lclj container test-container-subpath-configmap-lclj: 
STEP: delete the pod
Jan  8 14:34:29.574: INFO: Waiting for pod pod-subpath-test-configmap-lclj to disappear
Jan  8 14:34:29.583: INFO: Pod pod-subpath-test-configmap-lclj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lclj
Jan  8 14:34:29.583: INFO: Deleting pod "pod-subpath-test-configmap-lclj" in namespace "subpath-9651"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:34:29.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9651" for this suite.
Jan  8 14:34:35.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:34:35.760: INFO: namespace subpath-9651 deletion completed in 6.168293746s

• [SLOW TEST:37.164 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
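
The long stretch of Running phases is the point of the atomic-writer spec: the container keeps reading a file mounted via subPath while the kubelet atomically swaps the ConfigMap volume's contents underneath it, then exits successfully. The mechanism under test is a VolumeMount with SubPath set; an illustrative fragment, with the volume, ConfigMap, and subPath names assumed:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	vol := corev1.Volume{
    		Name: "configmap-volume",
    		VolumeSource: corev1.VolumeSource{
    			ConfigMap: &corev1.ConfigMapVolumeSource{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
    			},
    		},
    	}
    	// SubPath mounts a single entry of the volume rather than its root,
    	// which is what makes atomic updates interesting to verify.
    	mount := corev1.VolumeMount{
    		Name:      vol.Name,
    		MountPath: "/test-volume",
    		SubPath:   "configmap-key",
    	}
    	fmt.Printf("mount %s at %s (subPath %s)\n", mount.Name, mount.MountPath, mount.SubPath)
    }
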
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:34:35.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:35:08.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8923" for this suite.
Jan  8 14:35:14.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:35:14.321: INFO: namespace namespaces-8923 deletion completed in 6.186829329s
STEP: Destroying namespace "nsdeletetest-6991" for this suite.
Jan  8 14:35:14.325: INFO: Namespace nsdeletetest-6991 was already deleted
STEP: Destroying namespace "nsdeletetest-8011" for this suite.
Jan  8 14:35:20.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:35:20.461: INFO: namespace nsdeletetest-8011 deletion completed in 6.136320745s

• [SLOW TEST:44.700 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
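
The assertion here is that deleting a namespace also removes the pods inside it before the name can be reused. A sketch of that check with client-go, again assuming v1.15-era context-free signatures; the namespace and pod names are placeholders, and a real test would bound the polling loop with a timeout:

    package main

    import (
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ns, pod := "nsdeletetest-example", "test-pod" // illustrative names
    	if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
    		panic(err)
    	}
    	// Poll until the pod is gone; namespace teardown guarantees this.
    	for {
    		_, err := cs.CoreV1().Pods(ns).Get(pod, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			fmt.Println("pod removed with its namespace")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
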
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:35:20.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8099/configmap-test-e6aa3899-df6c-451b-9b73-45a3342a9caf
STEP: Creating a pod to test consume configMaps
Jan  8 14:35:20.737: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f" in namespace "configmap-8099" to be "success or failure"
Jan  8 14:35:20.746: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424629ms
Jan  8 14:35:22.767: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029886465s
Jan  8 14:35:24.777: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039081021s
Jan  8 14:35:26.878: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140549348s
Jan  8 14:35:28.903: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.165869655s
STEP: Saw pod success
Jan  8 14:35:28.903: INFO: Pod "pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f" satisfied condition "success or failure"
Jan  8 14:35:28.912: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f container env-test: 
STEP: delete the pod
Jan  8 14:35:28.982: INFO: Waiting for pod pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f to disappear
Jan  8 14:35:29.001: INFO: Pod pod-configmaps-d4375b64-c165-490d-9fee-9d156691a74f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:35:29.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8099" for this suite.
Jan  8 14:35:35.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:35:35.154: INFO: namespace configmap-8099 deletion completed in 6.146266488s

• [SLOW TEST:14.692 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
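
This spec consumes the ConfigMap through the environment rather than a volume: one container env var is resolved from a ConfigMap key at pod start, and the test reads it back from the container's output. An illustrative container fragment, with the variable, ConfigMap, and key names assumed:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "env-test",
    		Image:   "busybox",
    		Command: []string{"sh", "-c", "env"}, // print the environment and exit
    		Env: []corev1.EnvVar{{
    			Name: "CONFIG_DATA_1",
    			ValueFrom: &corev1.EnvVarSource{
    				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
    					Key:                  "data-1",
    				},
    			},
    		}},
    	}
    	fmt.Printf("%s <- configMap/%s key %s\n",
    		c.Env[0].Name, c.Env[0].ValueFrom.ConfigMapKeyRef.Name, c.Env[0].ValueFrom.ConfigMapKeyRef.Key)
    }
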
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:35:35.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-8268
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8268 to expose endpoints map[]
Jan  8 14:35:35.344: INFO: successfully validated that service endpoint-test2 in namespace services-8268 exposes endpoints map[] (18.929277ms elapsed)
STEP: Creating pod pod1 in namespace services-8268
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8268 to expose endpoints map[pod1:[80]]
Jan  8 14:35:39.432: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.070861643s elapsed, will retry)
Jan  8 14:35:42.474: INFO: successfully validated that service endpoint-test2 in namespace services-8268 exposes endpoints map[pod1:[80]] (7.113221372s elapsed)
STEP: Creating pod pod2 in namespace services-8268
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8268 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  8 14:35:48.201: INFO: Unexpected endpoints: found map[50a6a6ef-b8e2-4679-a134-0d166dd73f43:[80]], expected map[pod1:[80] pod2:[80]] (5.711391098s elapsed, will retry)
Jan  8 14:35:51.272: INFO: successfully validated that service endpoint-test2 in namespace services-8268 exposes endpoints map[pod1:[80] pod2:[80]] (8.782542756s elapsed)
STEP: Deleting pod pod1 in namespace services-8268
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8268 to expose endpoints map[pod2:[80]]
Jan  8 14:35:52.305: INFO: successfully validated that service endpoint-test2 in namespace services-8268 exposes endpoints map[pod2:[80]] (1.024543115s elapsed)
STEP: Deleting pod pod2 in namespace services-8268
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8268 to expose endpoints map[]
Jan  8 14:35:52.346: INFO: successfully validated that service endpoint-test2 in namespace services-8268 exposes endpoints map[] (34.788279ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:35:52.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8268" for this suite.
Jan  8 14:36:14.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:36:14.785: INFO: namespace services-8268 deletion completed in 22.286867653s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:39.631 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
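
The endpoints maps in those log lines are derived purely from label selection: the Service selects pods by label, and each ready pod contributes its IP and port, which is why the map grows to map[pod1:[80] pod2:[80]] and shrinks back to map[] as pods are created and deleted. A sketch of such a selector Service; the label key and value mirror the service name but are assumptions here:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	svc := &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
    		Spec: corev1.ServiceSpec{
    			// Any ready pod carrying this label becomes an endpoint.
    			Selector: map[string]string{"name": "endpoint-test2"},
    			Ports: []corev1.ServicePort{{
    				Protocol:   corev1.ProtocolTCP,
    				Port:       80,
    				TargetPort: intstr.FromInt(80),
    			}},
    		},
    	}
    	fmt.Printf("service %s selects %v\n", svc.Name, svc.Spec.Selector)
    }
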
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:36:14.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  8 14:36:23.026: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:36:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2453" for this suite.
Jan  8 14:36:29.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:36:29.184: INFO: namespace container-runtime-2453 deletion completed in 6.12837917s

• [SLOW TEST:14.399 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
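
The 'Expected: &{} to match' line is the actual assertion: with TerminationMessagePolicy set to FallbackToLogsOnError, a container that succeeds contributes no fallback message, so the termination message must stay empty. A fragment showing the two fields involved; the image and command are assumptions:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	c := corev1.Container{
    		Name:    "termination-message-container",
    		Image:   "busybox",
    		Command: []string{"/bin/sh", "-c", "true"}, // exit 0, write nothing
    		// Logs become the message only when the container *fails*;
    		// on success the message is left empty.
    		TerminationMessagePath:   "/dev/termination-log",
    		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    	}
    	fmt.Println("policy:", c.TerminationMessagePolicy)
    }
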
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:36:29.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-a5e3594c-7fa8-40cf-84ac-3ac89076e374
STEP: Creating a pod to test consume secrets
Jan  8 14:36:29.305: INFO: Waiting up to 5m0s for pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933" in namespace "secrets-7580" to be "success or failure"
Jan  8 14:36:29.332: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933": Phase="Pending", Reason="", readiness=false. Elapsed: 26.521519ms
Jan  8 14:36:31.339: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033691402s
Jan  8 14:36:33.379: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073911996s
Jan  8 14:36:35.387: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081406196s
Jan  8 14:36:37.392: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087353285s
STEP: Saw pod success
Jan  8 14:36:37.393: INFO: Pod "pod-secrets-9deae389-7b98-4000-837a-420c20192933" satisfied condition "success or failure"
Jan  8 14:36:37.398: INFO: Trying to get logs from node iruya-node pod pod-secrets-9deae389-7b98-4000-837a-420c20192933 container secret-volume-test: 
STEP: delete the pod
Jan  8 14:36:37.468: INFO: Waiting for pod pod-secrets-9deae389-7b98-4000-837a-420c20192933 to disappear
Jan  8 14:36:37.479: INFO: Pod pod-secrets-9deae389-7b98-4000-837a-420c20192933 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:36:37.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7580" for this suite.
Jan  8 14:36:43.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:36:43.688: INFO: namespace secrets-7580 deletion completed in 6.204718555s

• [SLOW TEST:14.503 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
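
This is the Secret twin of the earlier ConfigMap mapping spec: the structure is identical except that the volume source is a Secret. An illustrative volume fragment, with the secret name, key, path, and 0400 mode assumed:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	mode := int32(0400)
    	vol := corev1.Volume{
    		Name: "secret-volume",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName: "secret-test-map",
    				// Map key "data-1" to a chosen path with an explicit mode.
    				Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
    			},
    		},
    	}
    	fmt.Printf("secret %s -> %s\n", vol.Secret.SecretName, vol.Secret.Items[0].Path)
    }
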
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:36:43.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  8 14:36:43.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2208'
Jan  8 14:36:46.198: INFO: stderr: ""
Jan  8 14:36:46.199: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 14:36:46.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2208'
Jan  8 14:36:46.481: INFO: stderr: ""
Jan  8 14:36:46.481: INFO: stdout: "update-demo-nautilus-g7ldm update-demo-nautilus-pqk8m "
Jan  8 14:36:46.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7ldm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:46.664: INFO: stderr: ""
Jan  8 14:36:46.664: INFO: stdout: ""
Jan  8 14:36:46.664: INFO: update-demo-nautilus-g7ldm is created but not running
Jan  8 14:36:51.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2208'
Jan  8 14:36:52.955: INFO: stderr: ""
Jan  8 14:36:52.955: INFO: stdout: "update-demo-nautilus-g7ldm update-demo-nautilus-pqk8m "
Jan  8 14:36:52.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7ldm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:54.012: INFO: stderr: ""
Jan  8 14:36:54.012: INFO: stdout: ""
Jan  8 14:36:54.012: INFO: update-demo-nautilus-g7ldm is created but not running
Jan  8 14:36:59.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2208'
Jan  8 14:36:59.221: INFO: stderr: ""
Jan  8 14:36:59.221: INFO: stdout: "update-demo-nautilus-g7ldm update-demo-nautilus-pqk8m "
Jan  8 14:36:59.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7ldm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:59.348: INFO: stderr: ""
Jan  8 14:36:59.348: INFO: stdout: "true"
Jan  8 14:36:59.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7ldm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:59.453: INFO: stderr: ""
Jan  8 14:36:59.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:36:59.453: INFO: validating pod update-demo-nautilus-g7ldm
Jan  8 14:36:59.466: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:36:59.466: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:36:59.466: INFO: update-demo-nautilus-g7ldm is verified up and running
Jan  8 14:36:59.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqk8m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:59.568: INFO: stderr: ""
Jan  8 14:36:59.568: INFO: stdout: "true"
Jan  8 14:36:59.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pqk8m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2208'
Jan  8 14:36:59.757: INFO: stderr: ""
Jan  8 14:36:59.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 14:36:59.757: INFO: validating pod update-demo-nautilus-pqk8m
Jan  8 14:36:59.765: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 14:36:59.765: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  8 14:36:59.765: INFO: update-demo-nautilus-pqk8m is verified up and running
STEP: using delete to clean up resources
Jan  8 14:36:59.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2208'
Jan  8 14:36:59.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 14:36:59.907: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  8 14:36:59.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2208'
Jan  8 14:37:00.083: INFO: stderr: "No resources found.\n"
Jan  8 14:37:00.083: INFO: stdout: ""
Jan  8 14:37:00.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2208 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:37:00.226: INFO: stderr: ""
Jan  8 14:37:00.226: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:37:00.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2208" for this suite.
Jan  8 14:37:22.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:37:22.439: INFO: namespace kubectl-2208 deletion completed in 22.208081854s

• [SLOW TEST:38.748 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:37:22.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-03946ba0-4efd-47f1-87ef-0be284501d74
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:37:32.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1609" for this suite.
Jan  8 14:37:54.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:37:54.888: INFO: namespace configmap-1609 deletion completed in 22.177223817s

• [SLOW TEST:32.450 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
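
A ConfigMap can carry both UTF-8 text (data) and arbitrary bytes (binaryData); the spec above mounts one carrying both and waits for each file separately. A construction sketch, with key names and byte values chosen for illustration:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	cm := &corev1.ConfigMap{
    		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
    		Data:       map[string]string{"data": "value"},          // text keys
    		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad}}, // non-UTF-8 keys
    	}
    	fmt.Printf("%d text key(s), %d binary key(s)\n", len(cm.Data), len(cm.BinaryData))
    }
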
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:37:54.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  8 14:37:55.068: INFO: Waiting up to 5m0s for pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858" in namespace "emptydir-7223" to be "success or failure"
Jan  8 14:37:55.081: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858": Phase="Pending", Reason="", readiness=false. Elapsed: 13.10395ms
Jan  8 14:37:57.090: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021277781s
Jan  8 14:37:59.099: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03062853s
Jan  8 14:38:01.113: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044910165s
Jan  8 14:38:03.127: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05917734s
STEP: Saw pod success
Jan  8 14:38:03.128: INFO: Pod "pod-b9bbdb3d-6986-4c7d-85d5-08000e890858" satisfied condition "success or failure"
Jan  8 14:38:03.131: INFO: Trying to get logs from node iruya-node pod pod-b9bbdb3d-6986-4c7d-85d5-08000e890858 container test-container: 
STEP: delete the pod
Jan  8 14:38:03.253: INFO: Waiting for pod pod-b9bbdb3d-6986-4c7d-85d5-08000e890858 to disappear
Jan  8 14:38:03.263: INFO: Pod pod-b9bbdb3d-6986-4c7d-85d5-08000e890858 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:38:03.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7223" for this suite.
Jan  8 14:38:09.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:38:09.440: INFO: namespace emptydir-7223 deletion completed in 6.172508628s

• [SLOW TEST:14.551 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
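
The (root,0666,default) triple in the spec name encodes who writes the test file, the file mode under test, and the emptyDir medium, where default means the node's disk rather than tmpfs. The volume side of that is just an EmptyDirVolumeSource:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	vol := corev1.Volume{
    		Name: "test-volume",
    		VolumeSource: corev1.VolumeSource{
    			// StorageMediumDefault ("") selects the node's default backing store;
    			// StorageMediumMemory would request tmpfs instead.
    			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
    		},
    	}
    	fmt.Printf("emptyDir medium: %q\n", vol.EmptyDir.Medium)
    }
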
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:38:09.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4115
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 14:38:09.560: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 14:38:45.872: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4115 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 14:38:45.872: INFO: >>> kubeConfig: /root/.kube/config
I0108 14:38:45.961694       8 log.go:172] (0xc0011226e0) (0xc00202e780) Create stream
I0108 14:38:45.961754       8 log.go:172] (0xc0011226e0) (0xc00202e780) Stream added, broadcasting: 1
I0108 14:38:45.974371       8 log.go:172] (0xc0011226e0) Reply frame received for 1
I0108 14:38:45.974468       8 log.go:172] (0xc0011226e0) (0xc00165cfa0) Create stream
I0108 14:38:45.974483       8 log.go:172] (0xc0011226e0) (0xc00165cfa0) Stream added, broadcasting: 3
I0108 14:38:45.977875       8 log.go:172] (0xc0011226e0) Reply frame received for 3
I0108 14:38:45.977913       8 log.go:172] (0xc0011226e0) (0xc001029720) Create stream
I0108 14:38:45.977922       8 log.go:172] (0xc0011226e0) (0xc001029720) Stream added, broadcasting: 5
I0108 14:38:45.979881       8 log.go:172] (0xc0011226e0) Reply frame received for 5
I0108 14:38:46.153489       8 log.go:172] (0xc0011226e0) Data frame received for 3
I0108 14:38:46.153628       8 log.go:172] (0xc00165cfa0) (3) Data frame handling
I0108 14:38:46.153668       8 log.go:172] (0xc00165cfa0) (3) Data frame sent
I0108 14:38:46.277975       8 log.go:172] (0xc0011226e0) (0xc00165cfa0) Stream removed, broadcasting: 3
I0108 14:38:46.278368       8 log.go:172] (0xc0011226e0) Data frame received for 1
I0108 14:38:46.278422       8 log.go:172] (0xc00202e780) (1) Data frame handling
I0108 14:38:46.278465       8 log.go:172] (0xc00202e780) (1) Data frame sent
I0108 14:38:46.278506       8 log.go:172] (0xc0011226e0) (0xc00202e780) Stream removed, broadcasting: 1
I0108 14:38:46.278543       8 log.go:172] (0xc0011226e0) (0xc001029720) Stream removed, broadcasting: 5
I0108 14:38:46.278634       8 log.go:172] (0xc0011226e0) Go away received
I0108 14:38:46.278835       8 log.go:172] (0xc0011226e0) (0xc00202e780) Stream removed, broadcasting: 1
I0108 14:38:46.278857       8 log.go:172] (0xc0011226e0) (0xc00165cfa0) Stream removed, broadcasting: 3
I0108 14:38:46.278882       8 log.go:172] (0xc0011226e0) (0xc001029720) Stream removed, broadcasting: 5
Jan  8 14:38:46.278: INFO: Found all expected endpoints: [netserver-0]
Jan  8 14:38:46.289: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4115 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 14:38:46.289: INFO: >>> kubeConfig: /root/.kube/config
I0108 14:38:46.382013       8 log.go:172] (0xc000860b00) (0xc0014aa320) Create stream
I0108 14:38:46.382086       8 log.go:172] (0xc000860b00) (0xc0014aa320) Stream added, broadcasting: 1
I0108 14:38:46.391668       8 log.go:172] (0xc000860b00) Reply frame received for 1
I0108 14:38:46.391710       8 log.go:172] (0xc000860b00) (0xc00165d5e0) Create stream
I0108 14:38:46.391726       8 log.go:172] (0xc000860b00) (0xc00165d5e0) Stream added, broadcasting: 3
I0108 14:38:46.395261       8 log.go:172] (0xc000860b00) Reply frame received for 3
I0108 14:38:46.395287       8 log.go:172] (0xc000860b00) (0xc0010297c0) Create stream
I0108 14:38:46.395296       8 log.go:172] (0xc000860b00) (0xc0010297c0) Stream added, broadcasting: 5
I0108 14:38:46.402539       8 log.go:172] (0xc000860b00) Reply frame received for 5
I0108 14:38:46.708694       8 log.go:172] (0xc000860b00) Data frame received for 3
I0108 14:38:46.708788       8 log.go:172] (0xc00165d5e0) (3) Data frame handling
I0108 14:38:46.708803       8 log.go:172] (0xc00165d5e0) (3) Data frame sent
I0108 14:38:46.803558       8 log.go:172] (0xc000860b00) Data frame received for 1
I0108 14:38:46.803682       8 log.go:172] (0xc000860b00) (0xc0010297c0) Stream removed, broadcasting: 5
I0108 14:38:46.803719       8 log.go:172] (0xc0014aa320) (1) Data frame handling
I0108 14:38:46.803745       8 log.go:172] (0xc0014aa320) (1) Data frame sent
I0108 14:38:46.803761       8 log.go:172] (0xc000860b00) (0xc00165d5e0) Stream removed, broadcasting: 3
I0108 14:38:46.803788       8 log.go:172] (0xc000860b00) (0xc0014aa320) Stream removed, broadcasting: 1
I0108 14:38:46.803825       8 log.go:172] (0xc000860b00) Go away received
I0108 14:38:46.803995       8 log.go:172] (0xc000860b00) (0xc0014aa320) Stream removed, broadcasting: 1
I0108 14:38:46.804012       8 log.go:172] (0xc000860b00) (0xc00165d5e0) Stream removed, broadcasting: 3
I0108 14:38:46.804023       8 log.go:172] (0xc000860b00) (0xc0010297c0) Stream removed, broadcasting: 5
Jan  8 14:38:46.804: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:38:46.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4115" for this suite.
Jan  8 14:39:10.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:39:10.994: INFO: namespace pod-network-test-4115 deletion completed in 24.180932335s

• [SLOW TEST:61.553 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
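Aside: the two ExecWithOptions calls above are the actual connectivity check — each one curls http://<pod IP>:8080/hostName from host-test-container-pod and expects the netserver's name back (hence "Found all expected endpoints: [netserver-0]"). A minimal standalone sketch of the same probe in Go; the IP below is just the one this run happened to use, so substitute a real pod IP:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Mirrors the logged curl: GET /hostName from a netserver pod,
	// with a timeout comparable to curl's --max-time 15.
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.44.0.1:8080/hostName")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("hostName:", string(body)) // expect e.g. "netserver-0"
}
```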
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:39:10.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:39:19.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5028" for this suite.
Jan  8 14:40:01.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:40:01.339: INFO: namespace kubelet-test-5028 deletion completed in 42.162528469s

• [SLOW TEST:50.345 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
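Aside: the test body is elided from this log, but the mechanism under test is the pod-level hostAliases field, which the kubelet renders into the container's /etc/hosts. A hedged sketch of the kind of pod spec involved — the corev1.HostAlias API is real, while the names and IPs here are illustrative, not the test's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A busybox pod whose /etc/hosts gains entries for the aliases
	// below, written there by the kubelet before the container runs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-aliases-demo"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{
				{IP: "127.0.0.1", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Spec.HostAliases[0].Hostnames)
}
```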
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:40:01.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  8 14:40:01.405: INFO: Waiting up to 5m0s for pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d" in namespace "var-expansion-2340" to be "success or failure"
Jan  8 14:40:01.423: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.935743ms
Jan  8 14:40:03.449: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043760228s
Jan  8 14:40:05.466: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060686965s
Jan  8 14:40:07.481: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075711482s
Jan  8 14:40:09.495: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089763082s
STEP: Saw pod success
Jan  8 14:40:09.495: INFO: Pod "var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d" satisfied condition "success or failure"
Jan  8 14:40:09.499: INFO: Trying to get logs from node iruya-node pod var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d container dapi-container: 
STEP: delete the pod
Jan  8 14:40:09.663: INFO: Waiting for pod var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d to disappear
Jan  8 14:40:09.674: INFO: Pod var-expansion-cc48df1f-167e-46ee-a503-199d2f17ad0d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:40:09.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2340" for this suite.
Jan  8 14:40:15.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:40:15.909: INFO: namespace var-expansion-2340 deletion completed in 6.226271416s

• [SLOW TEST:14.570 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
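Aside: what this test exercises is the kubelet's $(VAR) expansion in a container's command/args against the container's environment. A rough standalone model of the substitution rule — by assumption this omits the $$(VAR) escape, and it leaves unresolvable references verbatim, which matches the documented behavior:

```go
package main

import (
	"fmt"
	"regexp"
)

var ref = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expand replaces $(NAME) references in a command word with values
// from the container's environment; unknown references are kept
// as-is, the way Kubernetes leaves them. ($$-escaping is omitted.)
func expand(word string, env map[string]string) string {
	return ref.ReplaceAllStringFunc(word, func(m string) string {
		name := ref.FindStringSubmatch(m)[1]
		if v, ok := env[name]; ok {
			return v
		}
		return m // leave unresolved references untouched
	})
}

func main() {
	env := map[string]string{"MESSAGE": "test-value"}
	// Roughly what a dapi-container command such as
	//   ["/bin/sh", "-c", "echo $(MESSAGE)"] expands to:
	fmt.Println(expand("echo $(MESSAGE)", env)) // echo test-value
	fmt.Println(expand("echo $(MISSING)", env)) // echo $(MISSING)
}
```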
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:40:15.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:40:16.069: INFO: Creating deployment "nginx-deployment"
Jan  8 14:40:16.075: INFO: Waiting for observed generation 1
Jan  8 14:40:19.533: INFO: Waiting for all required pods to come up
Jan  8 14:40:19.905: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  8 14:40:45.926: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  8 14:40:45.948: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  8 14:40:45.988: INFO: Updating deployment nginx-deployment
Jan  8 14:40:45.988: INFO: Waiting for observed generation 2
Jan  8 14:40:49.501: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  8 14:40:49.512: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  8 14:40:49.846: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  8 14:40:49.917: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  8 14:40:49.917: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  8 14:40:49.922: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  8 14:40:49.960: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  8 14:40:49.961: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  8 14:40:49.970: INFO: Updating deployment nginx-deployment
Jan  8 14:40:49.970: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  8 14:40:52.804: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  8 14:40:53.469: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
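Aside: the replica counts just verified (20 and 13) are exactly what proportional scaling predicts. The deployment is scaled from 10 to 30 with maxSurge=3, so up to 33 pods may exist at once; the 20 extra replicas are split between the two replicasets in proportion to their current sizes (8 old, 5 new). A worked version of that arithmetic in Go — note the leftover-replica tie-break shown here is a simplification of the controller's actual ordering:

```go
package main

import "fmt"

func main() {
	// Numbers from the log above: before scaling, the first
	// rollout's replicaset holds 8 replicas and the second holds 5,
	// i.e. 13 in total (10 desired + maxSurge 3).
	oldRS, newRS := 8, 5
	total := oldRS + newRS // 13
	allowed := 30 + 3      // new desired count + maxSurge = 33

	extra := allowed - total // 20 replicas to hand out

	// Distribute the extras in proportion to current sizes,
	// flooring the fractions and assigning the remainder last.
	oldAdd := extra * oldRS / total     // 20*8/13 = 12
	newAdd := extra * newRS / total     // 20*5/13 = 7
	leftover := extra - oldAdd - newAdd // 1, goes to the newer RS here

	fmt.Println(oldRS+oldAdd, newRS+newAdd+leftover) // 20 13
}
```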
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  8 14:40:58.546: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-461,SelfLink:/apis/apps/v1/namespaces/deployment-461/deployments/nginx-deployment,UID:7a5ceaeb-ea26-4b15-869d-00f6ab885ecc,ResourceVersion:19785607,Generation:3,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-01-08 14:40:52 +0000 UTC 2020-01-08 14:40:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-08 14:40:56 +0000 UTC 2020-01-08 14:40:16 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  8 14:41:00.415: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-461,SelfLink:/apis/apps/v1/namespaces/deployment-461/replicasets/nginx-deployment-55fb7cb77f,UID:c39f4060-a3c8-46d0-8f98-6eb449187577,ResourceVersion:19785601,Generation:3,CreationTimestamp:2020-01-08 14:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7a5ceaeb-ea26-4b15-869d-00f6ab885ecc 0xc001666717 0xc001666718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 14:41:00.415: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  8 14:41:00.415: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-461,SelfLink:/apis/apps/v1/namespaces/deployment-461/replicasets/nginx-deployment-7b8c6f4498,UID:5f8711aa-bd97-4cf6-ba89-e3cc116a9f81,ResourceVersion:19785600,Generation:3,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7a5ceaeb-ea26-4b15-869d-00f6ab885ecc 0xc0016667f7 0xc0016667f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-2xn5h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2xn5h,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-2xn5h,UID:32063644-d8ca-4f3c-9b31-1a29e9e8b24b,ResourceVersion:19785612,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc001667717 0xc001667718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001667820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001667870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-6snnz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6snnz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-6snnz,UID:a3712acc-1602-4810-97c6-4e71cfccb4ef,ResourceVersion:19785513,Generation:0,CreationTimestamp:2020-01-08 14:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc001667a77 0xc001667a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001667b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001667b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-96wgx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-96wgx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-96wgx,UID:a338da76-4abd-4422-9c01-53528e400407,ResourceVersion:19785510,Generation:0,CreationTimestamp:2020-01-08 14:40:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc001667cf7 0xc001667cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001667dc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001667e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-08 14:40:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-dqgkt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dqgkt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-dqgkt,UID:b2d79fe5-2700-4369-8d5a-ce934ea38f41,ResourceVersion:19785539,Generation:0,CreationTimestamp:2020-01-08 14:40:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e007 0xc00216e008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-dqw84" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dqw84,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-dqw84,UID:fc043dd0-7fab-4c6c-9a6e-070a1386ea44,ResourceVersion:19785566,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e1a7 0xc00216e1a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.778: INFO: Pod "nginx-deployment-55fb7cb77f-fs9gr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fs9gr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-fs9gr,UID:177033ec-f186-435a-a7c5-86cdafe7283c,ResourceVersion:19785537,Generation:0,CreationTimestamp:2020-01-08 14:40:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e2b7 0xc00216e2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-08 14:40:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.779: INFO: Pod "nginx-deployment-55fb7cb77f-g7svs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-g7svs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-g7svs,UID:9cd059b8-28e4-4cb9-abbb-2a57163eea18,ResourceVersion:19785579,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e447 0xc00216e448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.779: INFO: Pod "nginx-deployment-55fb7cb77f-j2qv7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j2qv7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-j2qv7,UID:72468d15-6b47-4504-b9a8-f80a79df5b0b,ResourceVersion:19785614,Generation:0,CreationTimestamp:2020-01-08 14:40:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e567 0xc00216e568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-08 14:40:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.779: INFO: Pod "nginx-deployment-55fb7cb77f-kjmcx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kjmcx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-kjmcx,UID:f4f4cda7-56e0-421e-ab80-2e121af8e2ad,ResourceVersion:19785591,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e6c7 0xc00216e6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.779: INFO: Pod "nginx-deployment-55fb7cb77f-swf87" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-swf87,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-swf87,UID:b562df44-99a0-4478-b554-e410dfbfc01c,ResourceVersion:19785580,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e7d7 0xc00216e7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.779: INFO: Pod "nginx-deployment-55fb7cb77f-tdzrs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tdzrs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-tdzrs,UID:c39663c7-5941-4ac2-a75c-30d505f4c005,ResourceVersion:19785575,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216e8f7 0xc00216e8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216e980} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216e9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-55fb7cb77f-z7hcr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-z7hcr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-z7hcr,UID:2a83f45f-9ac2-4641-81c5-cb87e60aabda,ResourceVersion:19785577,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216ea27 0xc00216ea28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216eab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216ead0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-55fb7cb77f-zprjt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zprjt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-55fb7cb77f-zprjt,UID:5fc6f044-ca3f-4c1c-925b-829e4038970e,ResourceVersion:19785530,Generation:0,CreationTimestamp:2020-01-08 14:40:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c39f4060-a3c8-46d0-8f98-6eb449187577 0xc00216eb57 0xc00216eb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216ebd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216ebf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-7b8c6f4498-2xbdk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2xbdk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-2xbdk,UID:006a935f-1723-40f2-98e3-5fbd7cd181a5,ResourceVersion:19785454,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216ecc7 0xc00216ecc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216ed40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216ed60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f56bc4b406c914e14d0c35735b9e581cccc36d426fa3cf6725dada4fbdf398f9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-7b8c6f4498-597qd" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-597qd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-597qd,UID:bdcdb22f-9cbe-463b-8a96-d98c3d69b655,ResourceVersion:19785474,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216ee37 0xc00216ee38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216eeb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216eed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://299e197a8869d730f0e1da325b2cbafe9dd57b0e1b89f668211fbb097fea0d8e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-7b8c6f4498-5gdjg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5gdjg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-5gdjg,UID:6a9b9db8-b9ab-466b-973f-b51871880b0f,ResourceVersion:19785597,Generation:0,CreationTimestamp:2020-01-08 14:40:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216efa7 0xc00216efa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f010} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-08 14:40:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.780: INFO: Pod "nginx-deployment-7b8c6f4498-6fr7c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6fr7c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-6fr7c,UID:3b2d14bb-bd3b-4bad-a77b-1ed60ebe423e,ResourceVersion:19785573,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f107 0xc00216f108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.781: INFO: Pod "nginx-deployment-7b8c6f4498-6kqjk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6kqjk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-6kqjk,UID:634949f2-b767-4dba-8fb9-2301b7854256,ResourceVersion:19785596,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f227 0xc00216f228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f2a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f2c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.781: INFO: Pod "nginx-deployment-7b8c6f4498-8zr9c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8zr9c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-8zr9c,UID:d3ec8662-bd31-4f78-9a5a-a14bdeca6a1c,ResourceVersion:19785595,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f347 0xc00216f348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f3b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.782: INFO: Pod "nginx-deployment-7b8c6f4498-bptw4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bptw4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-bptw4,UID:82c2cee4-977d-4cc7-8771-ef2d3091fbcc,ResourceVersion:19785432,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f457 0xc00216f458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f4d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2c623860d2adcc20c8172f880780d175994500ee41f43677477a14dc90f8a9b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.782: INFO: Pod "nginx-deployment-7b8c6f4498-fgc5v" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fgc5v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-fgc5v,UID:f1bd3bb1-6369-4c3d-a161-47e8314bfcb7,ResourceVersion:19785438,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f5c7 0xc00216f5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://651d74939364cdeef7c08b08c41ca8669e1bf3a30a88a9d1bc47d9da1009e031}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.782: INFO: Pod "nginx-deployment-7b8c6f4498-j45jp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j45jp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-j45jp,UID:37a13407-7b7a-43c1-b2e0-f6cc1bc42a0a,ResourceVersion:19785594,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f757 0xc00216f758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f7c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.782: INFO: Pod "nginx-deployment-7b8c6f4498-kxrxh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kxrxh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-kxrxh,UID:4aac9511-b8d9-4e38-9ec5-11060e6c34d3,ResourceVersion:19785465,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f867 0xc00216f868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216f8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216f8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a9ef68020c3cc669b720fd785dcf0e42874f95586a4c05ccf619ce06c9813d57}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.782: INFO: Pod "nginx-deployment-7b8c6f4498-mh5g4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mh5g4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-mh5g4,UID:203af3a7-6371-4785-a45f-2739246b56b5,ResourceVersion:19785623,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216f9c7 0xc00216f9c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216fa40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216fa60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-nj6nb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nj6nb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-nj6nb,UID:db39a613-5bf1-466b-bfb8-169db0aa1a61,ResourceVersion:19785588,Generation:0,CreationTimestamp:2020-01-08 14:40:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216fb27 0xc00216fb28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216fba0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216fbc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-08 14:40:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-nvzs6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nvzs6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-nvzs6,UID:68e63b51-37c7-4d58-a9f4-28bfdb7488af,ResourceVersion:19785576,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216fc87 0xc00216fc88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216fcf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216fd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-pzc6w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pzc6w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-pzc6w,UID:dd97d6bd-c7cf-428d-a726-b24bf2d6864f,ResourceVersion:19785471,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216fd97 0xc00216fd98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216fe00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216fe20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://311ba83dbd526a51ec249668e0833d64387ee6d03f1b18264a2f8fa2564ed608}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-s7lnp" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s7lnp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-s7lnp,UID:915d20f6-fca1-4fe3-889a-cbe397c5be72,ResourceVersion:19785451,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc00216fef7 0xc00216fef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00216ff70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00216ff90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://20ca3e3b453b584b0c95b8071c7e3b5d7d69ee703c9d2a27925a70b1539a7c3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-wckqp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wckqp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-wckqp,UID:f7efd2a2-eded-465c-89c9-3241882b7d65,ResourceVersion:19785593,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc0027b6087 0xc0027b6088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b6100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b6120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-wr95g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wr95g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-wr95g,UID:6e90b864-9243-4368-95f0-167022134c4f,ResourceVersion:19785592,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc0027b61a7 0xc0027b61a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b6220} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b6240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-xfn4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xfn4v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-xfn4v,UID:69ab2bd4-a9de-483f-a533-42103aa53356,ResourceVersion:19785578,Generation:0,CreationTimestamp:2020-01-08 14:40:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc0027b62c7 0xc0027b62c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b6330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b6350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-zb9ht" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zb9ht,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-zb9ht,UID:7a2b0cdd-c33d-4397-af08-3e8419f29d6f,ResourceVersion:19785558,Generation:0,CreationTimestamp:2020-01-08 14:40:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc0027b63d7 0xc0027b63d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b6440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b6460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:53 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  8 14:41:02.783: INFO: Pod "nginx-deployment-7b8c6f4498-zg9zv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zg9zv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-461,SelfLink:/api/v1/namespaces/deployment-461/pods/nginx-deployment-7b8c6f4498-zg9zv,UID:1ef75b4c-a1d7-4d1e-9cc4-887b36b429c0,ResourceVersion:19785445,Generation:0,CreationTimestamp:2020-01-08 14:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f8711aa-bd97-4cf6-ba89-e3cc116a9f81 0xc0027b64e7 0xc0027b64e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vfm6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vfm6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vfm6s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027b6560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027b6580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:40:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-08 14:40:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-08 14:40:36 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://adc12a5229b1f24c5ccdce5217f85acf34d604082c6ad1dac151efbf27678e21}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
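The "is available" / "is not available" verdicts in the dump above come from each pod's Ready condition: pods of the new ReplicaSet (7b8c6f4498, image nginx:1.14-alpine) that have reached Ready count as available, while the 55fb7cb77f pods (image nginx:404, which cannot be pulled) and the freshly created Pending pods do not. A minimal sketch of that readiness check, assuming only the standard k8s.io/api types (the e2e framework uses its own helper, which additionally honors minReadySeconds; omitted here):

    package podcheck

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // IsPodReady reports whether a pod's PodReady condition is True,
    // which is the gate behind the "is available" lines in this log.
    func IsPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }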
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:41:02.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-461" for this suite.
Jan  8 14:41:58.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:41:59.086: INFO: namespace deployment-461 deletion completed in 53.16750862s

• [SLOW TEST:103.176 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
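
For anyone replaying the proportional-scaling check above by hand, the behaviour can be reproduced with plain kubectl. This is a minimal sketch, assuming a reachable cluster; the namespace demo-prop and the broken tag nginx:does-not-exist are illustrative, not taken from the test source:

kubectl create namespace demo-prop
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine -n demo-prop
kubectl scale deployment nginx-deployment --replicas=10 -n demo-prop
# Start a rolling update that cannot finish, so old and new ReplicaSets coexist.
kubectl set image deployment/nginx-deployment nginx=nginx:does-not-exist -n demo-prop
# Scale while the rollout is in flight; the deployment controller spreads the
# additional replicas across both ReplicaSets in proportion to their sizes.
kubectl scale deployment nginx-deployment --replicas=30 -n demo-prop
kubectl get rs -n demo-prop
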
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:41:59.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan  8 14:41:59.260: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  8 14:42:04.274: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:42:04.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-643" for this suite.
Jan  8 14:42:10.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:42:10.642: INFO: namespace replication-controller-643 deletion completed in 6.183582645s

• [SLOW TEST:11.555 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
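
The release step above hinges on label selectors: relabelling a pod so it no longer matches its ReplicationController's selector makes the controller drop its ownerReference and spin up a replacement. A minimal sketch, assuming a reachable cluster (all names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
# Change the matched label; the RC releases the pod and creates a new one.
kubectl label pod "$POD" name=released --overwrite
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'   # now empty
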
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:42:10.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d0f57a4c-5a96-46cb-9152-1161aade9266
STEP: Creating a pod to test consume secrets
Jan  8 14:42:10.843: INFO: Waiting up to 5m0s for pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0" in namespace "secrets-890" to be "success or failure"
Jan  8 14:42:10.874: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 30.642694ms
Jan  8 14:42:12.911: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067342539s
Jan  8 14:42:14.924: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080718875s
Jan  8 14:42:16.939: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095919237s
Jan  8 14:42:18.947: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103698143s
Jan  8 14:42:20.958: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.114360941s
Jan  8 14:42:22.968: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.124186851s
STEP: Saw pod success
Jan  8 14:42:22.968: INFO: Pod "pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0" satisfied condition "success or failure"
Jan  8 14:42:22.973: INFO: Trying to get logs from node iruya-node pod pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0 container secret-volume-test: 
STEP: delete the pod
Jan  8 14:42:23.062: INFO: Waiting for pod pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0 to disappear
Jan  8 14:42:23.067: INFO: Pod pod-secrets-b1e8e81b-b737-4780-88e5-f560f870c6f0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:42:23.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-890" for this suite.
Jan  8 14:42:29.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:42:29.288: INFO: namespace secrets-890 deletion completed in 6.211105842s

• [SLOW TEST:18.646 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
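
The pattern exercised here is a secret projected into a pod as a volume; a minimal sketch, assuming a reachable cluster (secret name, pod name and key are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
  containers:
  - name: secret-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
EOF
kubectl logs pod-secrets-demo   # prints: value-1
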
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:42:29.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d84f549b-06ee-4a12-a6f7-8672efc94f7f
STEP: Creating a pod to test consume configMaps
Jan  8 14:42:29.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3" in namespace "projected-8073" to be "success or failure"
Jan  8 14:42:29.429: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3": Phase="Pending", Reason="", readiness=false. Elapsed: 56.112081ms
Jan  8 14:42:31.437: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064802405s
Jan  8 14:42:33.450: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077284369s
Jan  8 14:42:35.459: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086462043s
Jan  8 14:42:37.466: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093761821s
STEP: Saw pod success
Jan  8 14:42:37.466: INFO: Pod "pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3" satisfied condition "success or failure"
Jan  8 14:42:37.472: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 14:42:37.558: INFO: Waiting for pod pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3 to disappear
Jan  8 14:42:37.639: INFO: Pod pod-projected-configmaps-c08d030e-ed19-4d0e-9009-5b2eab574ca3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:42:37.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8073" for this suite.
Jan  8 14:42:43.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:42:43.812: INFO: namespace projected-8073 deletion completed in 6.167448389s

• [SLOW TEST:14.524 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
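
Projected volumes generalize the same idea, and running the consumer as non-root only needs a pod-level securityContext. A minimal sketch (names and UID are illustrative):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root, as the [LinuxOnly] test requires
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/cfg/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
EOF
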
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:42:43.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  8 14:42:43.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9887'
Jan  8 14:42:44.600: INFO: stderr: ""
Jan  8 14:42:44.600: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  8 14:42:45.611: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:45.612: INFO: Found 0 / 1
Jan  8 14:42:46.614: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:46.614: INFO: Found 0 / 1
Jan  8 14:42:47.615: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:47.616: INFO: Found 0 / 1
Jan  8 14:42:48.607: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:48.607: INFO: Found 0 / 1
Jan  8 14:42:49.614: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:49.614: INFO: Found 0 / 1
Jan  8 14:42:50.619: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:50.619: INFO: Found 0 / 1
Jan  8 14:42:51.627: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:51.627: INFO: Found 0 / 1
Jan  8 14:42:52.620: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:52.620: INFO: Found 0 / 1
Jan  8 14:42:53.617: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:53.617: INFO: Found 1 / 1
Jan  8 14:42:53.617: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  8 14:42:53.624: INFO: Selector matched 1 pods for map[app:redis]
Jan  8 14:42:53.624: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
STEP: checking for matching strings
Jan  8 14:42:53.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887'
Jan  8 14:42:53.837: INFO: stderr: ""
Jan  8 14:42:53.837: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Jan 14:42:51.697 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jan 14:42:51.697 # Server started, Redis version 3.2.12\n1:M 08 Jan 14:42:51.697 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jan 14:42:51.697 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  8 14:42:53.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887 --tail=1'
Jan  8 14:42:54.027: INFO: stderr: ""
Jan  8 14:42:54.027: INFO: stdout: "1:M 08 Jan 14:42:51.697 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  8 14:42:54.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887 --limit-bytes=1'
Jan  8 14:42:54.167: INFO: stderr: ""
Jan  8 14:42:54.167: INFO: stdout: " "
STEP: exposing timestamps
Jan  8 14:42:54.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887 --tail=1 --timestamps'
Jan  8 14:42:54.381: INFO: stderr: ""
Jan  8 14:42:54.381: INFO: stdout: "2020-01-08T14:42:51.698200662Z 1:M 08 Jan 14:42:51.697 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  8 14:42:56.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887 --since=1s'
Jan  8 14:42:57.112: INFO: stderr: ""
Jan  8 14:42:57.112: INFO: stdout: ""
Jan  8 14:42:57.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gph7x redis-master --namespace=kubectl-9887 --since=24h'
Jan  8 14:42:57.315: INFO: stderr: ""
Jan  8 14:42:57.315: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Jan 14:42:51.697 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Jan 14:42:51.697 # Server started, Redis version 3.2.12\n1:M 08 Jan 14:42:51.697 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Jan 14:42:51.697 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  8 14:42:57.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9887'
Jan  8 14:42:57.414: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 14:42:57.414: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  8 14:42:57.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9887'
Jan  8 14:42:57.678: INFO: stderr: "No resources found.\n"
Jan  8 14:42:57.678: INFO: stdout: ""
Jan  8 14:42:57.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9887 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 14:42:57.823: INFO: stderr: ""
Jan  8 14:42:57.824: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:42:57.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9887" for this suite.
Jan  8 14:43:19.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:43:19.978: INFO: namespace kubectl-9887 deletion completed in 22.142183624s

• [SLOW TEST:36.163 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
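
All four filtering steps above map directly onto kubectl logs flags and work against any pod; the pod and container names below are the ones from this particular run, so substitute your own:

kubectl logs redis-master-gph7x redis-master --tail=1          # last line only
kubectl logs redis-master-gph7x redis-master --limit-bytes=1   # first byte only
kubectl logs redis-master-gph7x redis-master --tail=1 --timestamps
kubectl logs redis-master-gph7x redis-master --since=1s        # empty if the pod has been quiet
kubectl logs redis-master-gph7x redis-master --since=24h       # effectively everything
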
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:43:19.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:43:31.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6369" for this suite.
Jan  8 14:43:53.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:43:53.357: INFO: namespace replication-controller-6369 deletion completed in 22.221151687s

• [SLOW TEST:33.379 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:43:53.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  8 14:43:53.568: INFO: Waiting up to 5m0s for pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3" in namespace "downward-api-4092" to be "success or failure"
Jan  8 14:43:53.682: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Pending", Reason="", readiness=false. Elapsed: 113.954463ms
Jan  8 14:43:55.690: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122068928s
Jan  8 14:43:57.777: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208842845s
Jan  8 14:43:59.787: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.219217557s
Jan  8 14:44:01.799: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230745569s
Jan  8 14:44:03.812: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.244425587s
STEP: Saw pod success
Jan  8 14:44:03.813: INFO: Pod "downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3" satisfied condition "success or failure"
Jan  8 14:44:03.828: INFO: Trying to get logs from node iruya-node pod downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3 container dapi-container: 
STEP: delete the pod
Jan  8 14:44:04.078: INFO: Waiting for pod downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3 to disappear
Jan  8 14:44:04.082: INFO: Pod downward-api-a11341cf-eb32-4fb5-921a-f39d56e684a3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:44:04.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4092" for this suite.
Jan  8 14:44:10.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:44:10.265: INFO: namespace downward-api-4092 deletion completed in 6.17816489s

• [SLOW TEST:16.908 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
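
The env vars come from the downward API's fieldRef; a minimal sketch of the pod spec shape, assuming a reachable cluster (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo
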
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:44:10.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 14:44:10.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb" in namespace "downward-api-6481" to be "success or failure"
Jan  8 14:44:10.430: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.553235ms
Jan  8 14:44:12.440: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041747573s
Jan  8 14:44:14.448: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050453637s
Jan  8 14:44:16.464: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066383965s
Jan  8 14:44:18.484: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085749891s
STEP: Saw pod success
Jan  8 14:44:18.484: INFO: Pod "downwardapi-volume-73987752-0836-4015-898c-26823b28fceb" satisfied condition "success or failure"
Jan  8 14:44:18.497: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-73987752-0836-4015-898c-26823b28fceb container client-container: 
STEP: delete the pod
Jan  8 14:44:18.568: INFO: Waiting for pod downwardapi-volume-73987752-0836-4015-898c-26823b28fceb to disappear
Jan  8 14:44:18.606: INFO: Pod downwardapi-volume-73987752-0836-4015-898c-26823b28fceb no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:44:18.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6481" for this suite.
Jan  8 14:44:24.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:44:24.810: INFO: namespace downward-api-6481 deletion completed in 6.195053068s

• [SLOW TEST:14.544 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
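
The memory request reaches the container through a downwardAPI volume with a resourceFieldRef; a minimal sketch (names and sizes are illustrative; containerName must match the consuming container):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs downward-volume-demo   # prints the request in bytes, e.g. 33554432
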
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:44:24.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 14:44:24.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1856'
Jan  8 14:44:25.245: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 14:44:25.245: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan  8 14:44:25.306: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-xnjvz]
Jan  8 14:44:25.306: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-xnjvz" in namespace "kubectl-1856" to be "running and ready"
Jan  8 14:44:25.321: INFO: Pod "e2e-test-nginx-rc-xnjvz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.399045ms
Jan  8 14:44:27.330: INFO: Pod "e2e-test-nginx-rc-xnjvz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024038468s
Jan  8 14:44:29.345: INFO: Pod "e2e-test-nginx-rc-xnjvz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038475736s
Jan  8 14:44:31.362: INFO: Pod "e2e-test-nginx-rc-xnjvz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05590682s
Jan  8 14:44:33.396: INFO: Pod "e2e-test-nginx-rc-xnjvz": Phase="Running", Reason="", readiness=true. Elapsed: 8.089569448s
Jan  8 14:44:33.396: INFO: Pod "e2e-test-nginx-rc-xnjvz" satisfied condition "running and ready"
Jan  8 14:44:33.396: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-xnjvz]
Jan  8 14:44:33.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1856'
Jan  8 14:44:33.634: INFO: stderr: ""
Jan  8 14:44:33.634: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan  8 14:44:33.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1856'
Jan  8 14:44:33.778: INFO: stderr: ""
Jan  8 14:44:33.778: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:44:33.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1856" for this suite.
Jan  8 14:44:55.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:44:55.992: INFO: namespace kubectl-1856 deletion completed in 22.20186983s

• [SLOW TEST:31.182 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
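
The stderr captured above is the point of interest: in this release the run/v1 generator still works but is deprecated. Equivalent, non-deprecated forms are sketched below; exact flags vary by kubectl version, and the names are illustrative:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1   # deprecated RC path
kubectl run nginx-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never              # bare pod instead
kubectl create deployment nginx --image=docker.io/library/nginx:1.14-alpine                    # managed workload instead
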
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:44:55.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bdce508d-8cd5-4300-a735-a535480eeace
STEP: Creating a pod to test consume secrets
Jan  8 14:44:56.102: INFO: Waiting up to 5m0s for pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff" in namespace "secrets-5862" to be "success or failure"
Jan  8 14:44:56.120: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 18.344309ms
Jan  8 14:44:58.168: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066001357s
Jan  8 14:45:00.203: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100617575s
Jan  8 14:45:02.247: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14543413s
Jan  8 14:45:04.256: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15385326s
Jan  8 14:45:06.266: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164184322s
STEP: Saw pod success
Jan  8 14:45:06.266: INFO: Pod "pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff" satisfied condition "success or failure"
Jan  8 14:45:06.273: INFO: Trying to get logs from node iruya-node pod pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff container secret-volume-test: 
STEP: delete the pod
Jan  8 14:45:06.332: INFO: Waiting for pod pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff to disappear
Jan  8 14:45:06.379: INFO: Pod pod-secrets-acf61f2a-1a45-4c5a-98f5-9259cb68d0ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:45:06.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5862" for this suite.
Jan  8 14:45:12.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:45:12.549: INFO: namespace secrets-5862 deletion completed in 6.163200729s

• [SLOW TEST:16.556 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:45:12.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-93039d89-33e8-4d2d-b602-d85e907e6745
STEP: Creating secret with name s-test-opt-upd-807edda8-e228-4b4c-b401-cad0adb6acac
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-93039d89-33e8-4d2d-b602-d85e907e6745
STEP: Updating secret s-test-opt-upd-807edda8-e228-4b4c-b401-cad0adb6acac
STEP: Creating secret with name s-test-opt-create-168ec409-03bc-4ebf-ae09-25abf2e64e1a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:45:29.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1618" for this suite.
Jan  8 14:45:51.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:45:51.211: INFO: namespace secrets-1618 deletion completed in 22.132983788s

• [SLOW TEST:38.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:45:51.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-thkn
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 14:45:51.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-thkn" in namespace "subpath-7483" to be "success or failure"
Jan  8 14:45:51.348: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Pending", Reason="", readiness=false. Elapsed: 23.488856ms
Jan  8 14:45:53.357: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032720766s
Jan  8 14:45:55.362: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037468353s
Jan  8 14:45:57.370: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045441448s
Jan  8 14:45:59.378: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053828496s
Jan  8 14:46:01.385: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 10.060998214s
Jan  8 14:46:03.393: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 12.068505807s
Jan  8 14:46:05.405: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 14.080992484s
Jan  8 14:46:07.413: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 16.088971619s
Jan  8 14:46:09.429: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 18.104310701s
Jan  8 14:46:11.436: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 20.111944393s
Jan  8 14:46:13.450: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 22.126031484s
Jan  8 14:46:15.466: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 24.141627213s
Jan  8 14:46:17.475: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 26.150369735s
Jan  8 14:46:19.488: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Running", Reason="", readiness=true. Elapsed: 28.163722922s
Jan  8 14:46:21.505: INFO: Pod "pod-subpath-test-secret-thkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.180711516s
STEP: Saw pod success
Jan  8 14:46:21.505: INFO: Pod "pod-subpath-test-secret-thkn" satisfied condition "success or failure"
Jan  8 14:46:21.509: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-thkn container test-container-subpath-secret-thkn: 
STEP: delete the pod
Jan  8 14:46:21.576: INFO: Waiting for pod pod-subpath-test-secret-thkn to disappear
Jan  8 14:46:21.583: INFO: Pod pod-subpath-test-secret-thkn no longer exists
STEP: Deleting pod pod-subpath-test-secret-thkn
Jan  8 14:46:21.583: INFO: Deleting pod "pod-subpath-test-secret-thkn" in namespace "subpath-7483"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:46:21.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7483" for this suite.
Jan  8 14:46:27.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:46:27.830: INFO: namespace subpath-7483 deletion completed in 6.240225541s

• [SLOW TEST:36.619 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
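
A subPath mount projects a single key of a volume onto a single path instead of shadowing the whole mount directory; a minimal sketch using a configMap (names are illustrative; the conformance test drives the same mechanism with a secret):

kubectl create configmap demo-sub --from-literal=app.conf='debug = true'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: demo-sub
  containers:
  - name: test-container-subpath
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/app/app.conf"]
    volumeMounts:
    - name: config
      mountPath: /etc/app/app.conf
      subPath: app.conf
EOF
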
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:46:27.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:46:57.973: INFO: Container started at 2020-01-08 14:46:33 +0000 UTC, pod became ready at 2020-01-08 14:46:57 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:46:57.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1337" for this suite.
Jan  8 14:47:20.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:47:20.185: INFO: namespace container-probe-1337 deletion completed in 22.205004809s

• [SLOW TEST:52.354 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
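
The "not ready before initial delay" behaviour is just initialDelaySeconds on a readinessProbe; a minimal sketch (timings and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: probe-test
    image: busybox:1.28
    command: ["sh", "-c", "touch /tmp/healthy && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 20   # pod stays NotReady for at least this long
      periodSeconds: 5
EOF
kubectl get pod readiness-demo -w   # READY flips from 0/1 to 1/1 after the delay
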
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:47:20.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d3083faa-ef08-4403-9f24-1d109a734dad
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d3083faa-ef08-4403-9f24-1d109a734dad
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:47:30.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3731" for this suite.
Jan  8 14:47:52.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:47:52.944: INFO: namespace projected-3731 deletion completed in 22.24942207s

• [SLOW TEST:32.759 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:47:52.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan  8 14:48:02.129: INFO: Pod name pod-adoption-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:48:02.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4132" for this suite.
Jan  8 14:48:20.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:48:20.380: INFO: namespace replicaset-4132 deletion completed in 18.158775242s

• [SLOW TEST:27.434 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
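
Both adoption and release are visible in metadata.ownerReferences; a minimal sketch, assuming a reachable cluster (names are illustrative):

kubectl run pod-adoption-release --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption-release
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-adoption-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Adoption: the orphan pod now carries an ownerReference to the ReplicaSet.
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# Release: change the matched label and the ownerReference is removed again,
# while the ReplicaSet creates a replacement pod.
kubectl label pod pod-adoption-release name=released --overwrite
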
SSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:48:20.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-270.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-270.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-270.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-270.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 14:48:32.597: INFO: Unable to read wheezy_udp@PodARecord from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.601: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.606: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-270.svc.cluster.local from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.610: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.612: INFO: Unable to read jessie_udp@PodARecord from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.615: INFO: Unable to read jessie_tcp@PodARecord from pod dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b: the server could not find the requested resource (get pods dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b)
Jan  8 14:48:32.615: INFO: Lookups using dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-270.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  8 14:48:37.671: INFO: DNS probes using dns-270/dns-test-3b695879-96ba-4f33-99f6-1fe01fe53c2b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:48:37.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-270" for this suite.
Jan  8 14:48:43.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:48:44.020: INFO: namespace dns-270 deletion completed in 6.208435639s

• [SLOW TEST:23.640 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
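
What the wheezy and jessie probers check above is that the kubelet writes the pod's own hostname into /etc/hosts; a minimal way to see the same entries by hand (pod name and image are illustrative):

kubectl run hosts-demo --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec hosts-demo -- cat /etc/hosts     # contains '<pod-ip>  hosts-demo'
kubectl delete pod hosts-demo
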
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:48:44.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  8 14:48:44.095: INFO: Waiting up to 5m0s for pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b" in namespace "emptydir-8221" to be "success or failure"
Jan  8 14:48:44.144: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 48.542622ms
Jan  8 14:48:46.154: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058157922s
Jan  8 14:48:48.163: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06774711s
Jan  8 14:48:50.172: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076657032s
Jan  8 14:48:52.208: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112591203s
STEP: Saw pod success
Jan  8 14:48:52.208: INFO: Pod "pod-7931500c-c3f3-478d-bb99-2481eb21cd8b" satisfied condition "success or failure"
Jan  8 14:48:52.221: INFO: Trying to get logs from node iruya-node pod pod-7931500c-c3f3-478d-bb99-2481eb21cd8b container test-container: 
STEP: delete the pod
Jan  8 14:48:52.646: INFO: Waiting for pod pod-7931500c-c3f3-478d-bb99-2481eb21cd8b to disappear
Jan  8 14:48:52.658: INFO: Pod pod-7931500c-c3f3-478d-bb99-2481eb21cd8b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:48:52.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8221" for this suite.
Jan  8 14:48:58.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:48:58.817: INFO: namespace emptydir-8221 deletion completed in 6.152668576s

• [SLOW TEST:14.796 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
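The "success or failure" condition polled above is simply pod phase: the test container inspects the tmpfs-backed mount, prints what it finds, and exits 0, so the pod moves from Pending to Succeeded and its logs (fetched right after) carry the observed mode bits. A hypothetical reduction of the (non-root,0777,tmpfs) pod follows; the suite's real mounttest image and flags differ.

// sketch_emptydir_mode.go - illustrative only; image and command are assumptions.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }

func emptyDirModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // the "non-root" half of the test name
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "test-volume", MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// Medium "Memory" is what makes the emptyDir tmpfs-backed.
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
		},
	}
}
------------------------------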
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:48:58.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0108 14:49:29.527490       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 14:49:29.527: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:49:29.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2395" for this suite.
Jan  8 14:49:36.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:49:36.527: INFO: namespace gc-2395 deletion completed in 6.993879044s

• [SLOW TEST:37.710 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
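The pass/fail hinge above is the delete call: with PropagationPolicy set to Orphan, the garbage collector must strip owner references instead of cascading, so the deployment's ReplicaSet has to survive the 30-second watch window. A sketch of that call, assuming the release-1.15 client-go signatures this suite was built against (newer releases take a context and options by value):

// sketch_orphan_delete.go - illustrative helper, not the suite's own code.
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// orphanDeployment deletes a Deployment while leaving its ReplicaSet
// (and that ReplicaSet's pods) behind for the test to observe.
func orphanDeployment(cs kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}
------------------------------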
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:49:36.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  8 14:49:36.912: INFO: Waiting up to 5m0s for pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718" in namespace "containers-2470" to be "success or failure"
Jan  8 14:49:36.933: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Pending", Reason="", readiness=false. Elapsed: 20.999522ms
Jan  8 14:49:39.109: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196606163s
Jan  8 14:49:41.116: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203803195s
Jan  8 14:49:43.200: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288049477s
Jan  8 14:49:45.208: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Pending", Reason="", readiness=false. Elapsed: 8.296000139s
Jan  8 14:49:47.216: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.303992659s
STEP: Saw pod success
Jan  8 14:49:47.217: INFO: Pod "client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718" satisfied condition "success or failure"
Jan  8 14:49:47.221: INFO: Trying to get logs from node iruya-node pod client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718 container test-container: <nil>
STEP: delete the pod
Jan  8 14:49:47.408: INFO: Waiting for pod client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718 to disappear
Jan  8 14:49:47.417: INFO: Pod client-containers-fa67212a-f3be-4245-bf57-31dc31f8b718 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:49:47.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2470" for this suite.
Jan  8 14:49:53.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:49:53.646: INFO: namespace containers-2470 deletion completed in 6.221679076s

• [SLOW TEST:17.119 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
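The "override all" pod rests on two container-spec facts: spec.containers[].command replaces the image's ENTRYPOINT, and spec.containers[].args replaces its CMD, so the container output must show the overridden values regardless of the image defaults. A hypothetical minimal pod exercising both:

// sketch_override.go - illustrative shape only; values are placeholders.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func overridePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/echo"},             // overrides ENTRYPOINT
				Args:    []string{"override", "arguments"}, // overrides CMD
			}},
		},
	}
}
------------------------------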
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:49:53.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  8 14:49:53.840: INFO: Waiting up to 5m0s for pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b" in namespace "emptydir-2382" to be "success or failure"
Jan  8 14:49:53.846: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.451979ms
Jan  8 14:49:55.872: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032056672s
Jan  8 14:49:57.881: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040963262s
Jan  8 14:49:59.896: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055508915s
Jan  8 14:50:01.905: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064643653s
Jan  8 14:50:03.918: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078020447s
STEP: Saw pod success
Jan  8 14:50:03.918: INFO: Pod "pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b" satisfied condition "success or failure"
Jan  8 14:50:03.931: INFO: Trying to get logs from node iruya-node pod pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b container test-container: <nil>
STEP: delete the pod
Jan  8 14:50:04.078: INFO: Waiting for pod pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b to disappear
Jan  8 14:50:04.093: INFO: Pod pod-b2db17e3-5ae4-49cd-9219-4e3088b60b3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:50:04.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2382" for this suite.
Jan  8 14:50:10.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:50:10.251: INFO: namespace emptydir-2382 deletion completed in 6.152335619s

• [SLOW TEST:16.604 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:50:10.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  8 14:50:11.143: INFO: Pod name wrapped-volume-race-24fceaea-f49c-48a3-972a-9f1f6cb17a8f: Found 0 pods out of 5
Jan  8 14:50:16.175: INFO: Pod name wrapped-volume-race-24fceaea-f49c-48a3-972a-9f1f6cb17a8f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-24fceaea-f49c-48a3-972a-9f1f6cb17a8f in namespace emptydir-wrapper-3477, will wait for the garbage collector to delete the pods
Jan  8 14:50:46.282: INFO: Deleting ReplicationController wrapped-volume-race-24fceaea-f49c-48a3-972a-9f1f6cb17a8f took: 16.444307ms
Jan  8 14:50:46.683: INFO: Terminating ReplicationController wrapped-volume-race-24fceaea-f49c-48a3-972a-9f1f6cb17a8f pods took: 400.515681ms
STEP: Creating RC which spawns configmap-volume pods
Jan  8 14:51:37.155: INFO: Pod name wrapped-volume-race-d0b367dc-e175-4088-aa4a-e5c2a05d3953: Found 0 pods out of 5
Jan  8 14:51:42.184: INFO: Pod name wrapped-volume-race-d0b367dc-e175-4088-aa4a-e5c2a05d3953: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d0b367dc-e175-4088-aa4a-e5c2a05d3953 in namespace emptydir-wrapper-3477, will wait for the garbage collector to delete the pods
Jan  8 14:52:10.332: INFO: Deleting ReplicationController wrapped-volume-race-d0b367dc-e175-4088-aa4a-e5c2a05d3953 took: 19.08196ms
Jan  8 14:52:10.632: INFO: Terminating ReplicationController wrapped-volume-race-d0b367dc-e175-4088-aa4a-e5c2a05d3953 pods took: 300.615504ms
STEP: Creating RC which spawns configmap-volume pods
Jan  8 14:52:56.971: INFO: Pod name wrapped-volume-race-20d03f42-f5fb-4ae1-9378-af1a293ec46a: Found 0 pods out of 5
Jan  8 14:53:01.984: INFO: Pod name wrapped-volume-race-20d03f42-f5fb-4ae1-9378-af1a293ec46a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-20d03f42-f5fb-4ae1-9378-af1a293ec46a in namespace emptydir-wrapper-3477, will wait for the garbage collector to delete the pods
Jan  8 14:53:34.112: INFO: Deleting ReplicationController wrapped-volume-race-20d03f42-f5fb-4ae1-9378-af1a293ec46a took: 15.343845ms
Jan  8 14:53:34.612: INFO: Terminating ReplicationController wrapped-volume-race-20d03f42-f5fb-4ae1-9378-af1a293ec46a pods took: 500.528001ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:54:17.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3477" for this suite.
Jan  8 14:54:27.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:54:27.614: INFO: namespace emptydir-wrapper-3477 deletion completed in 10.181114872s

• [SLOW TEST:257.363 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
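The three create/wait/delete rounds above are deliberate stress: each round an RC spawns five pods that all mount dozens of configMap volumes at once, a pattern that historically raced inside the kubelet's wrapper-volume handling. A sketch of the setup half, assuming release-1.15 client-go (names and data are placeholders):

// sketch_wrapper_race.go - builds the 50 configmaps and the volume list a
// pod template would mount; illustrative, not the suite's helper.
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createConfigMapVolumes(cs kubernetes.Interface, ns string) ([]corev1.Volume, error) {
	var volumes []corev1.Volume
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("racey-configmap-%d", i)
		if _, err := cs.CoreV1().ConfigMaps(ns).Create(&corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns},
			Data:       map[string]string{"data-1": "value-1"},
		}); err != nil {
			return nil, err
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
	}
	return volumes, nil
}
------------------------------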
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:54:27.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:54:37.949: INFO: Waiting up to 5m0s for pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103" in namespace "pods-1585" to be "success or failure"
Jan  8 14:54:37.962: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103": Phase="Pending", Reason="", readiness=false. Elapsed: 13.330411ms
Jan  8 14:54:39.975: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025562063s
Jan  8 14:54:41.983: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03423462s
Jan  8 14:54:43.993: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043644087s
Jan  8 14:54:46.003: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053757074s
STEP: Saw pod success
Jan  8 14:54:46.003: INFO: Pod "client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103" satisfied condition "success or failure"
Jan  8 14:54:46.005: INFO: Trying to get logs from node iruya-node pod client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103 container env3cont: <nil>
STEP: delete the pod
Jan  8 14:54:46.058: INFO: Waiting for pod client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103 to disappear
Jan  8 14:54:46.077: INFO: Pod client-envvars-54136c60-f1c7-4b84-b26a-8c7938838103 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:54:46.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1585" for this suite.
Jan  8 14:55:38.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:55:38.393: INFO: namespace pods-1585 deletion completed in 52.257693859s

• [SLOW TEST:70.778 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
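What this case actually pins down: the service must exist before the pod starts, because the kubelet snapshots the visible services into docker-links-style environment variables only at container creation. A sketch of the probe pod, with the variable names the kubelet would inject for a hypothetical service named "fooservice":

// sketch_service_env.go - illustrative; the suite's service and pod names differ.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// For a pre-existing service "fooservice", the kubelet injects variables
// such as FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT and the
// FOOSERVICE_PORT_* family; the probe just prints its environment and the
// test greps the log for them.
func envProbePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-envvars-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont", // container name as in the log above
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
			}},
		},
	}
}
------------------------------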
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:55:38.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  8 14:55:38.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1133'
Jan  8 14:55:40.525: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  8 14:55:40.526: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  8 14:55:40.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1133'
Jan  8 14:55:40.985: INFO: stderr: ""
Jan  8 14:55:40.985: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:55:40.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1133" for this suite.
Jan  8 14:56:03.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:56:03.232: INFO: namespace kubectl-1133 deletion completed in 22.214977483s

• [SLOW TEST:24.838 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
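As the stderr above notes, --generator=job/v1 is deprecated (with kubectl create or run-pod/v1 as the suggested replacements); what it emitted under the hood is an ordinary batch/v1 Job whose pod template restarts failed containers in place. Roughly, as a sketch:

// sketch_job.go - approximate object produced by the deprecated generator.
package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nginxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure restarts the container in the same pod
					// rather than creating replacement pods.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-job",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
------------------------------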
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:56:03.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  8 14:56:03.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8639 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  8 14:56:11.710: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0108 14:56:10.495893    3929 log.go:172] (0xc000a5e160) (0xc0006b4500) Create stream\nI0108 14:56:10.496440    3929 log.go:172] (0xc000a5e160) (0xc0006b4500) Stream added, broadcasting: 1\nI0108 14:56:10.509418    3929 log.go:172] (0xc000a5e160) Reply frame received for 1\nI0108 14:56:10.509646    3929 log.go:172] (0xc000a5e160) (0xc00092a640) Create stream\nI0108 14:56:10.509670    3929 log.go:172] (0xc000a5e160) (0xc00092a640) Stream added, broadcasting: 3\nI0108 14:56:10.517995    3929 log.go:172] (0xc000a5e160) Reply frame received for 3\nI0108 14:56:10.518224    3929 log.go:172] (0xc000a5e160) (0xc000420000) Create stream\nI0108 14:56:10.518237    3929 log.go:172] (0xc000a5e160) (0xc000420000) Stream added, broadcasting: 5\nI0108 14:56:10.520511    3929 log.go:172] (0xc000a5e160) Reply frame received for 5\nI0108 14:56:10.520589    3929 log.go:172] (0xc000a5e160) (0xc00092a6e0) Create stream\nI0108 14:56:10.520603    3929 log.go:172] (0xc000a5e160) (0xc00092a6e0) Stream added, broadcasting: 7\nI0108 14:56:10.522379    3929 log.go:172] (0xc000a5e160) Reply frame received for 7\nI0108 14:56:10.522863    3929 log.go:172] (0xc00092a640) (3) Writing data frame\nI0108 14:56:10.523061    3929 log.go:172] (0xc00092a640) (3) Writing data frame\nI0108 14:56:10.533551    3929 log.go:172] (0xc000a5e160) Data frame received for 5\nI0108 14:56:10.533577    3929 log.go:172] (0xc000420000) (5) Data frame handling\nI0108 14:56:10.533609    3929 log.go:172] (0xc000420000) (5) Data frame sent\nI0108 14:56:10.543573    3929 log.go:172] (0xc000a5e160) Data frame received for 5\nI0108 14:56:10.543589    3929 log.go:172] (0xc000420000) (5) Data frame handling\nI0108 14:56:10.543603    3929 log.go:172] (0xc000420000) (5) Data frame sent\nI0108 14:56:11.643530    3929 log.go:172] (0xc000a5e160) (0xc00092a640) Stream removed, broadcasting: 3\nI0108 14:56:11.644058    3929 log.go:172] (0xc000a5e160) Data frame received for 1\nI0108 14:56:11.644127    3929 log.go:172] (0xc0006b4500) (1) Data frame handling\nI0108 14:56:11.644181    3929 log.go:172] (0xc0006b4500) (1) Data frame sent\nI0108 14:56:11.644539    3929 log.go:172] (0xc000a5e160) (0xc000420000) Stream removed, broadcasting: 5\nI0108 14:56:11.645197    3929 log.go:172] (0xc000a5e160) (0xc00092a6e0) Stream removed, broadcasting: 7\nI0108 14:56:11.645420    3929 log.go:172] (0xc000a5e160) (0xc0006b4500) Stream removed, broadcasting: 1\nI0108 14:56:11.645481    3929 log.go:172] (0xc000a5e160) Go away received\nI0108 14:56:11.645941    3929 log.go:172] (0xc000a5e160) (0xc0006b4500) Stream removed, broadcasting: 1\nI0108 14:56:11.645972    3929 log.go:172] (0xc000a5e160) (0xc00092a640) Stream removed, broadcasting: 3\nI0108 14:56:11.645984    3929 log.go:172] (0xc000a5e160) (0xc000420000) Stream removed, broadcasting: 5\nI0108 14:56:11.645999    3929 log.go:172] (0xc000a5e160) (0xc00092a6e0) Stream removed, broadcasting: 7\n"
Jan  8 14:56:11.710: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:56:13.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8639" for this suite.
Jan  8 14:56:19.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:56:19.912: INFO: namespace kubectl-8639 deletion completed in 6.180324611s

• [SLOW TEST:16.680 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:56:19.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 14:56:20.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d" in namespace "downward-api-9397" to be "success or failure"
Jan  8 14:56:20.056: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.908186ms
Jan  8 14:56:22.073: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032211957s
Jan  8 14:56:24.081: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040508033s
Jan  8 14:56:26.097: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056069999s
Jan  8 14:56:28.104: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063524602s
STEP: Saw pod success
Jan  8 14:56:28.104: INFO: Pod "downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d" satisfied condition "success or failure"
Jan  8 14:56:28.109: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d container client-container: <nil>
STEP: delete the pod
Jan  8 14:56:28.151: INFO: Waiting for pod downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d to disappear
Jan  8 14:56:28.169: INFO: Pod downwardapi-volume-6db2359d-811f-4ad2-9060-82294219a41d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:56:28.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9397" for this suite.
Jan  8 14:56:34.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:56:34.517: INFO: namespace downward-api-9397 deletion completed in 6.340880662s

• [SLOW TEST:14.605 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
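DefaultMode on a downward API volume is the permission applied to every projected file that does not set its own mode; the test then reads those bits back from inside the container. A hypothetical minimal pod projecting metadata.name with 0400 defaults (the suite's real image, paths, and mode may differ):

// sketch_downward_mode.go - illustrative shape of the DefaultMode pod.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(v int32) *int32 { return &v }

func downwardModePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: int32Ptr(0400), // applied to "podname" below
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
		},
	}
}
------------------------------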
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:56:34.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  8 14:56:34.727: INFO: Number of nodes with available pods: 0
Jan  8 14:56:34.727: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:35.922: INFO: Number of nodes with available pods: 0
Jan  8 14:56:35.922: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:36.742: INFO: Number of nodes with available pods: 0
Jan  8 14:56:36.742: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:37.742: INFO: Number of nodes with available pods: 0
Jan  8 14:56:37.742: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:38.737: INFO: Number of nodes with available pods: 0
Jan  8 14:56:38.737: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:41.018: INFO: Number of nodes with available pods: 0
Jan  8 14:56:41.018: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:41.935: INFO: Number of nodes with available pods: 0
Jan  8 14:56:41.935: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:42.741: INFO: Number of nodes with available pods: 0
Jan  8 14:56:42.741: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:43.742: INFO: Number of nodes with available pods: 0
Jan  8 14:56:43.742: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:44.746: INFO: Number of nodes with available pods: 1
Jan  8 14:56:44.746: INFO: Node iruya-node is running more than one daemon pod
Jan  8 14:56:45.746: INFO: Number of nodes with available pods: 2
Jan  8 14:56:45.746: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  8 14:56:45.804: INFO: Number of nodes with available pods: 1
Jan  8 14:56:45.804: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:46.822: INFO: Number of nodes with available pods: 1
Jan  8 14:56:46.822: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:47.829: INFO: Number of nodes with available pods: 1
Jan  8 14:56:47.829: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:48.819: INFO: Number of nodes with available pods: 1
Jan  8 14:56:48.819: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:49.822: INFO: Number of nodes with available pods: 1
Jan  8 14:56:49.822: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:50.824: INFO: Number of nodes with available pods: 1
Jan  8 14:56:50.824: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:51.826: INFO: Number of nodes with available pods: 1
Jan  8 14:56:51.826: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:52.830: INFO: Number of nodes with available pods: 1
Jan  8 14:56:52.831: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:53.822: INFO: Number of nodes with available pods: 1
Jan  8 14:56:53.822: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:54.856: INFO: Number of nodes with available pods: 1
Jan  8 14:56:54.856: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:55.840: INFO: Number of nodes with available pods: 1
Jan  8 14:56:55.840: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:56.817: INFO: Number of nodes with available pods: 1
Jan  8 14:56:56.817: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:57.970: INFO: Number of nodes with available pods: 1
Jan  8 14:56:57.970: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:58.910: INFO: Number of nodes with available pods: 1
Jan  8 14:56:58.910: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:56:59.831: INFO: Number of nodes with available pods: 1
Jan  8 14:56:59.831: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:00.816: INFO: Number of nodes with available pods: 1
Jan  8 14:57:00.816: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:03.636: INFO: Number of nodes with available pods: 1
Jan  8 14:57:03.637: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:04.395: INFO: Number of nodes with available pods: 1
Jan  8 14:57:04.395: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:05.259: INFO: Number of nodes with available pods: 1
Jan  8 14:57:05.259: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:05.885: INFO: Number of nodes with available pods: 1
Jan  8 14:57:05.885: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:06.815: INFO: Number of nodes with available pods: 1
Jan  8 14:57:06.815: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan  8 14:57:07.819: INFO: Number of nodes with available pods: 2
Jan  8 14:57:07.819: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4265, will wait for the garbage collector to delete the pods
Jan  8 14:57:07.893: INFO: Deleting DaemonSet.extensions daemon-set took: 13.865739ms
Jan  8 14:57:08.193: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.33551ms
Jan  8 14:57:17.900: INFO: Number of nodes with available pods: 0
Jan  8 14:57:17.900: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 14:57:17.904: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4265/daemonsets","resourceVersion":"19788717"},"items":null}

Jan  8 14:57:17.936: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4265/pods","resourceVersion":"19788717"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:57:17.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4265" for this suite.
Jan  8 14:57:23.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:57:24.063: INFO: namespace daemonsets-4265 deletion completed in 6.108198416s

• [SLOW TEST:49.545 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
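Two notes for reading the poll loop above: "Node X is running more than one daemon pod" is the suite's catch-all message whenever a node does not have exactly one available daemon pod, which is why it also appears while counts are still zero; and the second polling block is the revival check after one daemon pod is deleted. The DaemonSet under test is minimal, along these lines (labels and image are placeholders):

// sketch_daemonset.go - illustrative one-pod-per-node workload.
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// Selector and template labels must match; the controller then
			// keeps exactly one pod available per schedulable node.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
------------------------------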
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:57:24.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  8 14:57:24.198: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:57:41.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2178" for this suite.
Jan  8 14:58:03.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:58:03.532: INFO: namespace init-container-2178 deletion completed in 22.18190533s

• [SLOW TEST:39.469 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
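The single "PodSpec: initContainers in spec.initContainers" line hides the actual assertion: with restartPolicy Always, every init container must run to completion, in order, before the regular container starts, and only then does the pod report Ready (the roughly 17 seconds between pod creation and teardown above is that sequential startup). A hypothetical pod of the same shape:

// sketch_init.go - illustrative ordering of init containers; images assumed.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 must exit 0 before init2 starts; init2 before run1.
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}
------------------------------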
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:58:03.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:58:03.649: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  8 14:58:03.660: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  8 14:58:08.668: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  8 14:58:12.686: INFO: Creating deployment "test-rolling-update-deployment"
Jan  8 14:58:12.704: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  8 14:58:12.711: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  8 14:58:14.723: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  8 14:58:14.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 14:58:16.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 14:58:18.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092292, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 14:58:20.734: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  8 14:58:20.748: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4258,SelfLink:/apis/apps/v1/namespaces/deployment-4258/deployments/test-rolling-update-deployment,UID:b4ece27a-d816-4a76-a3cb-132c829cf9df,ResourceVersion:19788906,Generation:1,CreationTimestamp:2020-01-08 14:58:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-08 14:58:12 +0000 UTC 2020-01-08 14:58:12 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-08 14:58:19 +0000 UTC 2020-01-08 14:58:12 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  8 14:58:20.753: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4258,SelfLink:/apis/apps/v1/namespaces/deployment-4258/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:2a446807-36fc-4dc0-8c8d-afdbc2a668be,ResourceVersion:19788895,Generation:1,CreationTimestamp:2020-01-08 14:58:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b4ece27a-d816-4a76-a3cb-132c829cf9df 0xc0032ade47 0xc0032ade48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  8 14:58:20.753: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  8 14:58:20.753: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4258,SelfLink:/apis/apps/v1/namespaces/deployment-4258/replicasets/test-rolling-update-controller,UID:a0ba722f-a234-482f-9e9a-079922e662c1,ResourceVersion:19788905,Generation:2,CreationTimestamp:2020-01-08 14:58:03 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b4ece27a-d816-4a76-a3cb-132c829cf9df 0xc0032add77 0xc0032add78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 14:58:20.757: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-5g422" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-5g422,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4258,SelfLink:/api/v1/namespaces/deployment-4258/pods/test-rolling-update-deployment-79f6b9d75c-5g422,UID:e9718486-60cf-418f-8d7b-50cce517d025,ResourceVersion:19788894,Generation:0,CreationTimestamp:2020-01-08 14:58:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 2a446807-36fc-4dc0-8c8d-afdbc2a668be 0xc000e8e6b7 0xc000e8e6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8lgqt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8lgqt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-8lgqt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000e8e730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000e8e750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:58:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:58:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:58:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 14:58:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-08 14:58:12 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-08 14:58:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://d6638ed0ec8ed37ba67bc6258618d1ca24002ac32e860baed5ffba70b3a777a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:58:20.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4258" for this suite.
Jan  8 14:58:27.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:58:27.287: INFO: namespace deployment-4258 deletion completed in 6.522297919s

• [SLOW TEST:23.754 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:58:27.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 14:58:27.666: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"9ac3d2cb-1e52-410a-8e0d-bd1ff9382a2c", Controller:(*bool)(0xc002c3b882), BlockOwnerDeletion:(*bool)(0xc002c3b883)}}
Jan  8 14:58:27.696: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2548fb9d-472b-4490-8817-b13c34dff1c4", Controller:(*bool)(0xc0027b71aa), BlockOwnerDeletion:(*bool)(0xc0027b71ab)}}
Jan  8 14:58:27.719: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a1edae59-22ec-42ba-b82a-2fde21451e06", Controller:(*bool)(0xc002c3bfd2), BlockOwnerDeletion:(*bool)(0xc002c3bfd3)}}
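The three INFO lines above set up the deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, so no pod is reachable from a live root and the garbage collector must still delete all three despite the loop. A sketch of inspecting and attaching such references by hand (the uid must be the owner's real uid, so it is fetched first; OWNER_UID is used because bash reserves UID):

    # Read the owner of each pod in the cycle (namespace from this run)
    for p in pod1 pod2 pod3; do
      kubectl get pod "$p" -n gc-7209 \
        -o jsonpath='{.metadata.name} owned by {.metadata.ownerReferences[0].name}{"\n"}'
    done
    # Attach an ownerReference after creation with a merge patch
    OWNER_UID=$(kubectl get pod pod3 -n gc-7209 -o jsonpath='{.metadata.uid}')
    kubectl patch pod pod1 -n gc-7209 --type=merge -p \
      "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$OWNER_UID\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"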
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:58:33.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7209" for this suite.
Jan  8 14:58:39.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:58:39.983: INFO: namespace gc-7209 deletion completed in 6.383504611s

• [SLOW TEST:12.696 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:58:39.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  8 14:58:40.080: INFO: Waiting up to 5m0s for pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b" in namespace "emptydir-3331" to be "success or failure"
Jan  8 14:58:40.087: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.782307ms
Jan  8 14:58:42.102: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021388242s
Jan  8 14:58:44.109: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029019989s
Jan  8 14:58:46.118: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038228529s
Jan  8 14:58:48.131: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050833292s
Jan  8 14:58:50.140: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060295307s
STEP: Saw pod success
Jan  8 14:58:50.141: INFO: Pod "pod-2df6f30a-21ed-47e4-8168-92e0d06e956b" satisfied condition "success or failure"
Jan  8 14:58:50.145: INFO: Trying to get logs from node iruya-node pod pod-2df6f30a-21ed-47e4-8168-92e0d06e956b container test-container: 
STEP: delete the pod
Jan  8 14:58:50.241: INFO: Waiting for pod pod-2df6f30a-21ed-47e4-8168-92e0d06e956b to disappear
Jan  8 14:58:50.251: INFO: Pod pod-2df6f30a-21ed-47e4-8168-92e0d06e956b no longer exists
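The (root,0644,tmpfs) case can be approximated with a plain busybox pod; medium: Memory is what makes the emptyDir tmpfs-backed (a sketch, not the suite's own mounttest image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-tmpfs
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["sh", "-c", "echo content > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume/file && mount | grep /mnt/volume"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory    # tmpfs-backed emptyDir
    EOF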
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:58:50.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3331" for this suite.
Jan  8 14:58:56.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:58:56.505: INFO: namespace emptydir-3331 deletion completed in 6.24636418s

• [SLOW TEST:16.521 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:58:56.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  8 14:58:56.653: INFO: Waiting up to 5m0s for pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9" in namespace "downward-api-472" to be "success or failure"
Jan  8 14:58:56.680: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.217268ms
Jan  8 14:58:58.688: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035458093s
Jan  8 14:59:00.696: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043440668s
Jan  8 14:59:02.703: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049758768s
Jan  8 14:59:05.129: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.4758123s
STEP: Saw pod success
Jan  8 14:59:05.129: INFO: Pod "downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9" satisfied condition "success or failure"
Jan  8 14:59:05.204: INFO: Trying to get logs from node iruya-node pod downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9 container dapi-container: 
STEP: delete the pod
Jan  8 14:59:05.298: INFO: Waiting for pod downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9 to disappear
Jan  8 14:59:05.306: INFO: Pod downward-api-1b068b97-d7f7-4123-bedb-24fd567571c9 no longer exists
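When a container declares no resource limits, a resourceFieldRef for limits.cpu or limits.memory resolves to the node's allocatable values, which is what this spec asserts. A minimal sketch of the env wiring (pod name and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-defaults
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
        # no resources stanza: the values below fall back to node allocatable
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory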
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:59:05.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-472" for this suite.
Jan  8 14:59:11.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 14:59:11.449: INFO: namespace downward-api-472 deletion completed in 6.136742808s

• [SLOW TEST:14.943 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 14:59:11.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
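The spec runs busybox with a read-only root filesystem and expects the write to fail; roughly (a sketch, not the suite's exact pod):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-fs
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox:1.29
        command: ["sh", "-c", "echo test > /file; sleep 240"]   # the write to / is expected to fail
        securityContext:
          readOnlyRootFilesystem: true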
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 14:59:19.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9190" for this suite.
Jan  8 15:00:11.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:00:11.929: INFO: namespace kubelet-test-9190 deletion completed in 52.204315622s

• [SLOW TEST:60.480 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:00:11.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  8 15:00:12.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6074'
Jan  8 15:00:12.443: INFO: stderr: ""
Jan  8 15:00:12.443: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 15:00:12.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6074'
Jan  8 15:00:12.730: INFO: stderr: ""
Jan  8 15:00:12.730: INFO: stdout: "update-demo-nautilus-wrg9q update-demo-nautilus-xr8vv "
Jan  8 15:00:12.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wrg9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:12.849: INFO: stderr: ""
Jan  8 15:00:12.849: INFO: stdout: ""
Jan  8 15:00:12.849: INFO: update-demo-nautilus-wrg9q is created but not running
Jan  8 15:00:17.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6074'
Jan  8 15:00:18.915: INFO: stderr: ""
Jan  8 15:00:18.915: INFO: stdout: "update-demo-nautilus-wrg9q update-demo-nautilus-xr8vv "
Jan  8 15:00:18.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wrg9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:19.672: INFO: stderr: ""
Jan  8 15:00:19.672: INFO: stdout: ""
Jan  8 15:00:19.672: INFO: update-demo-nautilus-wrg9q is created but not running
Jan  8 15:00:24.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6074'
Jan  8 15:00:24.813: INFO: stderr: ""
Jan  8 15:00:24.813: INFO: stdout: "update-demo-nautilus-wrg9q update-demo-nautilus-xr8vv "
Jan  8 15:00:24.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wrg9q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:24.984: INFO: stderr: ""
Jan  8 15:00:24.984: INFO: stdout: "true"
Jan  8 15:00:24.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wrg9q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:25.084: INFO: stderr: ""
Jan  8 15:00:25.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 15:00:25.084: INFO: validating pod update-demo-nautilus-wrg9q
Jan  8 15:00:25.091: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 15:00:25.091: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  8 15:00:25.091: INFO: update-demo-nautilus-wrg9q is verified up and running
Jan  8 15:00:25.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xr8vv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:25.241: INFO: stderr: ""
Jan  8 15:00:25.241: INFO: stdout: "true"
Jan  8 15:00:25.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xr8vv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:25.353: INFO: stderr: ""
Jan  8 15:00:25.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  8 15:00:25.353: INFO: validating pod update-demo-nautilus-xr8vv
Jan  8 15:00:25.367: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  8 15:00:25.367: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  8 15:00:25.367: INFO: update-demo-nautilus-xr8vv is verified up and running
STEP: rolling-update to new replication controller
Jan  8 15:00:25.368: INFO: scanned /root for discovery docs: 
Jan  8 15:00:25.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6074'
Jan  8 15:00:57.017: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  8 15:00:57.017: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
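The same rollover can be driven by image name instead of a piped manifest; as the stderr above notes, rolling-update was already deprecated here in favor of Deployments and kubectl rollout. Both forms below are sketches against this run's namespace (deployment/update-demo is hypothetical, since this test uses a bare replication controller):

    # Image-based form of the same rolling update
    kubectl rolling-update update-demo-nautilus update-demo-kitten \
      --update-period=1s \
      --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0 \
      --namespace=kubectl-6074
    # Deployment-era equivalent
    kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    kubectl rollout status deployment/update-demo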
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  8 15:00:57.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6074'
Jan  8 15:00:57.124: INFO: stderr: ""
Jan  8 15:00:57.124: INFO: stdout: "update-demo-kitten-g4b2n update-demo-kitten-snvwn "
Jan  8 15:00:57.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4b2n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:57.223: INFO: stderr: ""
Jan  8 15:00:57.223: INFO: stdout: "true"
Jan  8 15:00:57.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g4b2n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:57.316: INFO: stderr: ""
Jan  8 15:00:57.316: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  8 15:00:57.316: INFO: validating pod update-demo-kitten-g4b2n
Jan  8 15:00:57.330: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  8 15:00:57.330: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  8 15:00:57.330: INFO: update-demo-kitten-g4b2n is verified up and running
Jan  8 15:00:57.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snvwn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:57.429: INFO: stderr: ""
Jan  8 15:00:57.429: INFO: stdout: "true"
Jan  8 15:00:57.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-snvwn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6074'
Jan  8 15:00:57.566: INFO: stderr: ""
Jan  8 15:00:57.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  8 15:00:57.567: INFO: validating pod update-demo-kitten-snvwn
Jan  8 15:00:57.592: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  8 15:00:57.592: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  8 15:00:57.592: INFO: update-demo-kitten-snvwn is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:00:57.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6074" for this suite.
Jan  8 15:01:25.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:01:25.800: INFO: namespace kubectl-6074 deletion completed in 28.203625344s

• [SLOW TEST:73.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:01:25.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
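The property being checked is that watches opened at different resourceVersions replay the same events in the same order. Outside the suite, a watch pinned to a resourceVersion can be opened through the raw API; a sketch (namespace and resource are illustrative, and the connection streams until the server times it out):

    # Open a watch starting at resourceVersion 0 and stream watch events as JSON
    kubectl get --raw \
      "/api/v1/namespaces/watch-5482/configmaps?watch=true&resourceVersion=0"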
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:01:31.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5482" for this suite.
Jan  8 15:01:37.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:01:37.746: INFO: namespace watch-5482 deletion completed in 6.264550825s

• [SLOW TEST:11.946 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:01:37.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5709.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5709.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5709.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5709.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.225.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.225.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.225.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.225.249_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5709.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5709.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5709.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5709.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5709.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5709.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 249.225.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.225.249_udp@PTR;check="$$(dig +tcp +noall +answer +search 249.225.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.225.249_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 15:01:52.757: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.763: INFO: Unable to read 10.96.225.249_udp@PTR from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.768: INFO: Unable to read 10.96.225.249_tcp@PTR from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.776: INFO: Unable to read jessie_udp@dns-test-service.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.780: INFO: Unable to read jessie_tcp@dns-test-service.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.785: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.825: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.828: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.833: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-5709.svc.cluster.local from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.836: INFO: Unable to read jessie_udp@PodARecord from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.841: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508: the server could not find the requested resource (get pods dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508)
Jan  8 15:01:52.854: INFO: Lookups using dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508 failed for: [wheezy_tcp@PodARecord 10.96.225.249_udp@PTR 10.96.225.249_tcp@PTR jessie_udp@dns-test-service.dns-5709.svc.cluster.local jessie_tcp@dns-test-service.dns-5709.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5709.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-5709.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-5709.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  8 15:01:58.277: INFO: DNS probes using dns-5709/dns-test-4989d519-1542-4bf3-9a6b-ab4507aba508 succeeded
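The probe loops above boil down to resolving the service's A, SRV, and PTR records from inside the cluster. A one-off check can be run from a throwaway pod (a sketch; busybox's nslookup only covers the A-record case):

    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup dns-test-service.dns-5709.svc.cluster.local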

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:01:58.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5709" for this suite.
Jan  8 15:02:04.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:02:04.962: INFO: namespace dns-5709 deletion completed in 6.265620504s

• [SLOW TEST:27.216 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:02:04.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e235ffb2-b482-495b-b803-0e5438867baa
STEP: Creating a pod to test consume configMaps
Jan  8 15:02:05.312: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba" in namespace "projected-7815" to be "success or failure"
Jan  8 15:02:05.329: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 16.695153ms
Jan  8 15:02:07.340: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027409836s
Jan  8 15:02:09.452: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139567151s
Jan  8 15:02:11.461: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148844442s
Jan  8 15:02:13.472: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159247273s
STEP: Saw pod success
Jan  8 15:02:13.472: INFO: Pod "pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba" satisfied condition "success or failure"
Jan  8 15:02:13.477: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba container projected-configmap-volume-test: 
STEP: delete the pod
Jan  8 15:02:13.538: INFO: Waiting for pod pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba to disappear
Jan  8 15:02:13.543: INFO: Pod pod-projected-configmaps-d4c49601-7a21-4732-aee2-d56e6dfb15ba no longer exists
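"With mappings" means individual configMap keys are projected to chosen paths instead of one file per key. The volume stanza looks like this (names are illustrative; the mapping mirrors the suite's data-1 -> path/to/data-2 convention):

    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map
            items:
            - key: data-1
              path: path/to/data-2    # key "data-1" appears at .../path/to/data-2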
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:02:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7815" for this suite.
Jan  8 15:02:19.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:02:19.848: INFO: namespace projected-7815 deletion completed in 6.270909636s

• [SLOW TEST:14.885 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:02:19.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  8 15:02:19.984: INFO: Waiting up to 5m0s for pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2" in namespace "containers-2449" to be "success or failure"
Jan  8 15:02:19.994: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.407079ms
Jan  8 15:02:22.004: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020038818s
Jan  8 15:02:24.016: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031992535s
Jan  8 15:02:26.023: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039149106s
Jan  8 15:02:28.031: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046754416s
Jan  8 15:02:30.038: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054176733s
STEP: Saw pod success
Jan  8 15:02:30.039: INFO: Pod "client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2" satisfied condition "success or failure"
Jan  8 15:02:30.042: INFO: Trying to get logs from node iruya-node pod client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2 container test-container: 
STEP: delete the pod
Jan  8 15:02:30.125: INFO: Waiting for pod client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2 to disappear
Jan  8 15:02:30.132: INFO: Pod client-containers-f4df67a7-36a8-4e93-912c-15d06dddfdc2 no longer exists
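With command and args left blank, the kubelet runs whatever ENTRYPOINT and CMD the image metadata declares; a sketch (image is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-defaults
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0
        # no command/args: the image's own ENTRYPOINT and CMD are used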
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:02:30.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2449" for this suite.
Jan  8 15:02:36.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:02:36.303: INFO: namespace containers-2449 deletion completed in 6.159095444s

• [SLOW TEST:16.454 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:02:36.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  8 15:02:36.412: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
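-p 0 asks the kernel for any free port; kubectl prints the address it bound, which the test then curls. By hand (a sketch; substitute the printed port):

    kubectl proxy -p 0 --disable-filter &
    # kubectl prints e.g. "Starting to serve on 127.0.0.1:PORT"
    curl http://127.0.0.1:PORT/api/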
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:02:36.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8932" for this suite.
Jan  8 15:02:42.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:02:42.690: INFO: namespace kubectl-8932 deletion completed in 6.164569199s

• [SLOW TEST:6.388 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:02:42.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-2c9f46dc-b188-4f97-8bd9-338264ddc140
STEP: Creating secret with name secret-projected-all-test-volume-832405a1-01de-44fc-b93e-a39c8cc65ee5
STEP: Creating a pod to test all projections for the projected volume plugin
Jan  8 15:02:42.827: INFO: Waiting up to 5m0s for pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8" in namespace "projected-8714" to be "success or failure"
Jan  8 15:02:42.832: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.948737ms
Jan  8 15:02:44.846: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019597079s
Jan  8 15:02:46.865: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038660745s
Jan  8 15:02:48.888: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061002409s
Jan  8 15:02:50.902: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075733719s
STEP: Saw pod success
Jan  8 15:02:50.902: INFO: Pod "projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8" satisfied condition "success or failure"
Jan  8 15:02:50.907: INFO: Trying to get logs from node iruya-node pod projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8 container projected-all-volume-test: 
STEP: delete the pod
Jan  8 15:02:50.992: INFO: Waiting for pod projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8 to disappear
Jan  8 15:02:50.997: INFO: Pod projected-volume-bd761100-dcfa-4ce6-9e12-93faeefff9f8 no longer exists
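A projected volume can combine configMap, secret, and downward API sources under a single mount, which is what "all components" exercises; a sketch of the volume (names are illustrative):

    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: configmap-projected-all-test-volume
        - secret:
            name: secret-projected-all-test-volume
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name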
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:02:50.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8714" for this suite.
Jan  8 15:02:57.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:02:57.193: INFO: namespace projected-8714 deletion completed in 6.19165374s

• [SLOW TEST:14.503 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:02:57.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-13054cdd-03a8-4c6f-bddd-4a9b7063ee3e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-13054cdd-03a8-4c6f-bddd-4a9b7063ee3e
STEP: waiting to observe update in volume
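The update shows up without restarting the pod because the kubelet refreshes configMap volumes in place on its sync period, hence the wait above. A by-hand version (names are illustrative; substitute the real pod name):

    kubectl create configmap live-cm --from-literal=data-1=value-1
    # ... a pod mounts live-cm as a volume at /etc/config ...
    kubectl create configmap live-cm --from-literal=data-1=value-2 \
      -o yaml --dry-run | kubectl replace -f -
    # after the kubelet's next sync the mounted file reflects the new value
    kubectl exec POD_NAME -- cat /etc/config/data-1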
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:04:17.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2378" for this suite.
Jan  8 15:04:39.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:04:39.262: INFO: namespace configmap-2378 deletion completed in 22.179943468s

• [SLOW TEST:102.068 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:04:39.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  8 15:04:39.396: INFO: Waiting up to 5m0s for pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8" in namespace "downward-api-6018" to be "success or failure"
Jan  8 15:04:39.469: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8": Phase="Pending", Reason="", readiness=false. Elapsed: 73.253095ms
Jan  8 15:04:41.482: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086300276s
Jan  8 15:04:43.489: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092661308s
Jan  8 15:04:45.501: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104368632s
Jan  8 15:04:47.510: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113343921s
STEP: Saw pod success
Jan  8 15:04:47.510: INFO: Pod "downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8" satisfied condition "success or failure"
Jan  8 15:04:47.515: INFO: Trying to get logs from node iruya-node pod downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8 container dapi-container: 
STEP: delete the pod
Jan  8 15:04:47.592: INFO: Waiting for pod downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8 to disappear
Jan  8 15:04:47.635: INFO: Pod downward-api-b8e76d03-27de-4965-81d8-0900d4d957c8 no longer exists
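Unlike the default-limits case earlier, this spec sets explicit requests and limits and exposes them through resourceFieldRef; divisor picks the unit the value is reported in. A sketch of the container stanza:

    containers:
    - name: dapi-container
      image: busybox:1.29
      resources:
        requests: {cpu: 250m, memory: 32Mi}
        limits:   {cpu: 500m, memory: 64Mi}
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu
            divisor: 1m      # report millicores: 500m -> 500
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.memory
            divisor: 1Mi     # report MiB: 32Mi -> 32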
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:04:47.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6018" for this suite.
Jan  8 15:04:53.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:04:53.795: INFO: namespace downward-api-6018 deletion completed in 6.153783352s

• [SLOW TEST:14.533 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:04:53.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 15:04:53.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe" in namespace "projected-8879" to be "success or failure"
Jan  8 15:04:53.966: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 75.500436ms
Jan  8 15:04:55.978: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08655779s
Jan  8 15:04:57.985: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094203323s
Jan  8 15:05:00.001: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110405087s
Jan  8 15:05:02.013: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122312403s
STEP: Saw pod success
Jan  8 15:05:02.013: INFO: Pod "downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe" satisfied condition "success or failure"
Jan  8 15:05:02.021: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe container client-container: 
STEP: delete the pod
Jan  8 15:05:02.105: INFO: Waiting for pod downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe to disappear
Jan  8 15:05:02.114: INFO: Pod downwardapi-volume-8417f6a8-980d-4e1d-9097-663a675199fe no longer exists
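DefaultMode applies one file mode to every file the projected volume creates; in a pod spec that looks like this (a sketch):

    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400    # applied to every projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name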
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:05:02.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8879" for this suite.
Jan  8 15:05:08.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:05:08.333: INFO: namespace projected-8879 deletion completed in 6.212951503s

• [SLOW TEST:14.537 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:05:08.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
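A rollover is an update made while a previous rollout is still in flight: the controller scales old ReplicaSets down against the newest template instead of finishing each intermediate one, which the status dumps below poll for. Driven by hand it is just consecutive image updates (deployment and container names are illustrative):

    kubectl set image deployment/test-rollover-deployment redis-slave=gcr.io/google-samples/gb-redisslave:v3
    # update again before the first rollout completes; the deployment rolls over to the newest template
    kubectl set image deployment/test-rollover-deployment redis-slave=gcr.io/kubernetes-e2e-test-images/redis:1.0
    kubectl rollout status deployment/test-rollover-deployment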
Jan  8 15:05:08.471: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan  8 15:05:13.482: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  8 15:05:15.495: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan  8 15:05:17.503: INFO: Creating deployment "test-rollover-deployment"
Jan  8 15:05:17.543: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan  8 15:05:19.556: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan  8 15:05:19.566: INFO: Ensure that both replica sets have 1 created replica
Jan  8 15:05:19.574: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan  8 15:05:19.584: INFO: Updating deployment test-rollover-deployment
Jan  8 15:05:19.584: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan  8 15:05:21.612: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan  8 15:05:21.625: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan  8 15:05:21.639: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:21.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092720, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:23.669: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:23.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092720, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:25.655: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:25.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092720, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:27.654: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:27.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092720, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:29.657: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:29.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:31.655: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:31.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:33.654: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:33.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:35.665: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:35.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:37.652: INFO: all replica sets need to contain the pod-template-hash label
Jan  8 15:05:37.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092729, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63714092717, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  8 15:05:39.651: INFO: 
Jan  8 15:05:39.651: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  8 15:05:39.662: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9037,SelfLink:/apis/apps/v1/namespaces/deployment-9037/deployments/test-rollover-deployment,UID:f4f03382-8a8f-4a53-91bb-74962e56d712,ResourceVersion:19790144,Generation:2,CreationTimestamp:2020-01-08 15:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-08 15:05:17 +0000 UTC 2020-01-08 15:05:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-08 15:05:39 +0000 UTC 2020-01-08 15:05:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  8 15:05:39.666: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9037,SelfLink:/apis/apps/v1/namespaces/deployment-9037/replicasets/test-rollover-deployment-854595fc44,UID:66bb54ee-d5f2-4f7b-9900-544c087d403a,ResourceVersion:19790134,Generation:2,CreationTimestamp:2020-01-08 15:05:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f4f03382-8a8f-4a53-91bb-74962e56d712 0xc001666f37 0xc001666f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  8 15:05:39.667: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan  8 15:05:39.667: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9037,SelfLink:/apis/apps/v1/namespaces/deployment-9037/replicasets/test-rollover-controller,UID:7c3a4961-1088-4657-abd8-ec7c472778d0,ResourceVersion:19790143,Generation:2,CreationTimestamp:2020-01-08 15:05:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f4f03382-8a8f-4a53-91bb-74962e56d712 0xc001666c87 0xc001666c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 15:05:39.667: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9037,SelfLink:/apis/apps/v1/namespaces/deployment-9037/replicasets/test-rollover-deployment-9b8b997cf,UID:c0f1293b-79c7-42a6-b834-68af7c3940a7,ResourceVersion:19790099,Generation:2,CreationTimestamp:2020-01-08 15:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f4f03382-8a8f-4a53-91bb-74962e56d712 0xc001667180 0xc001667181}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  8 15:05:39.672: INFO: Pod "test-rollover-deployment-854595fc44-fpzz2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-fpzz2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9037,SelfLink:/api/v1/namespaces/deployment-9037/pods/test-rollover-deployment-854595fc44-fpzz2,UID:8deb2b22-194d-459f-80c8-655895277693,ResourceVersion:19790118,Generation:0,CreationTimestamp:2020-01-08 15:05:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 66bb54ee-d5f2-4f7b-9900-544c087d403a 0xc0029d5487 0xc0029d5488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bslss {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bslss,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bslss true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029d5560} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029d5630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 15:05:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 15:05:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 15:05:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-08 15:05:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-08 15:05:19 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-08 15:05:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://117ec780d54a34290e7369965e1a9092ef72a5accc2413d3acc5ed9d64169139}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:05:39.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9037" for this suite.
Jan  8 15:05:45.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:05:45.854: INFO: namespace deployment-9037 deletion completed in 6.175924547s

• [SLOW TEST:37.521 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
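
Note: the rollover above amounts to updating a Deployment's pod template and waiting for the new ReplicaSet to displace the old ones without dropping below minimum availability (per the dump above, the strategy is RollingUpdate with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10, which is why each poll keeps one old replica until the new pod has been ready for 10s). A rough hand-driven equivalent (the deployment name is illustrative; the images mirror the ones this test used):

$ kubectl create deployment rollover-demo --image=docker.io/library/nginx:1.14-alpine
$ kubectl set image deployment/rollover-demo nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
$ kubectl rollout status deployment/rollover-demo   # blocks until the new ReplicaSet is fully rolled out
$ kubectl get rs -l app=rollover-demo               # the old ReplicaSet should now show DESIRED 0
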
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:05:45.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 15:05:45.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7" in namespace "projected-1872" to be "success or failure"
Jan  8 15:05:46.068: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Pending", Reason="", readiness=false. Elapsed: 143.465812ms
Jan  8 15:05:48.079: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154149365s
Jan  8 15:05:50.095: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170943998s
Jan  8 15:05:52.106: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181691436s
Jan  8 15:05:54.163: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238237723s
Jan  8 15:05:56.172: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.247526464s
STEP: Saw pod success
Jan  8 15:05:56.172: INFO: Pod "downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7" satisfied condition "success or failure"
Jan  8 15:05:56.176: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7 container client-container: 
STEP: delete the pod
Jan  8 15:05:56.236: INFO: Waiting for pod downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7 to disappear
Jan  8 15:05:56.243: INFO: Pod downwardapi-volume-1fa07303-4326-4229-9972-b85adb7106c7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:05:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1872" for this suite.
Jan  8 15:06:02.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:06:02.399: INFO: namespace projected-1872 deletion completed in 6.113463842s

• [SLOW TEST:16.545 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
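
Note: the memory-limit test above relies on a projected downwardAPI volume with a resourceFieldRef, which renders the container's limits.memory into a file (in bytes, with the default divisor of 1). A minimal sketch of the same plumbing (names and the 64Mi limit are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
$ kubectl logs downward-memlimit-demo   # expect 67108864 (64Mi in bytes)
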
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:06:02.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-115
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  8 15:06:02.539: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  8 15:06:38.904: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-115 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 15:06:38.904: INFO: >>> kubeConfig: /root/.kube/config
I0108 15:06:38.991382       8 log.go:172] (0xc0012aef20) (0xc001c82140) Create stream
I0108 15:06:38.991431       8 log.go:172] (0xc0012aef20) (0xc001c82140) Stream added, broadcasting: 1
I0108 15:06:38.998979       8 log.go:172] (0xc0012aef20) Reply frame received for 1
I0108 15:06:38.999037       8 log.go:172] (0xc0012aef20) (0xc0014ab220) Create stream
I0108 15:06:38.999045       8 log.go:172] (0xc0012aef20) (0xc0014ab220) Stream added, broadcasting: 3
I0108 15:06:39.001629       8 log.go:172] (0xc0012aef20) Reply frame received for 3
I0108 15:06:39.001666       8 log.go:172] (0xc0012aef20) (0xc001744b40) Create stream
I0108 15:06:39.001679       8 log.go:172] (0xc0012aef20) (0xc001744b40) Stream added, broadcasting: 5
I0108 15:06:39.003441       8 log.go:172] (0xc0012aef20) Reply frame received for 5
I0108 15:06:39.196153       8 log.go:172] (0xc0012aef20) Data frame received for 3
I0108 15:06:39.196330       8 log.go:172] (0xc0014ab220) (3) Data frame handling
I0108 15:06:39.196358       8 log.go:172] (0xc0014ab220) (3) Data frame sent
I0108 15:06:39.413789       8 log.go:172] (0xc0012aef20) Data frame received for 1
I0108 15:06:39.414057       8 log.go:172] (0xc0012aef20) (0xc001744b40) Stream removed, broadcasting: 5
I0108 15:06:39.414114       8 log.go:172] (0xc001c82140) (1) Data frame handling
I0108 15:06:39.414160       8 log.go:172] (0xc001c82140) (1) Data frame sent
I0108 15:06:39.414208       8 log.go:172] (0xc0012aef20) (0xc0014ab220) Stream removed, broadcasting: 3
I0108 15:06:39.414226       8 log.go:172] (0xc0012aef20) (0xc001c82140) Stream removed, broadcasting: 1
I0108 15:06:39.414260       8 log.go:172] (0xc0012aef20) Go away received
I0108 15:06:39.414990       8 log.go:172] (0xc0012aef20) (0xc001c82140) Stream removed, broadcasting: 1
I0108 15:06:39.415024       8 log.go:172] (0xc0012aef20) (0xc0014ab220) Stream removed, broadcasting: 3
I0108 15:06:39.415042       8 log.go:172] (0xc0012aef20) (0xc001744b40) Stream removed, broadcasting: 5
Jan  8 15:06:39.415: INFO: Waiting for endpoints: map[]
Jan  8 15:06:39.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-115 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  8 15:06:39.423: INFO: >>> kubeConfig: /root/.kube/config
I0108 15:06:39.485375       8 log.go:172] (0xc000860bb0) (0xc0014abcc0) Create stream
I0108 15:06:39.485463       8 log.go:172] (0xc000860bb0) (0xc0014abcc0) Stream added, broadcasting: 1
I0108 15:06:39.491396       8 log.go:172] (0xc000860bb0) Reply frame received for 1
I0108 15:06:39.491467       8 log.go:172] (0xc000860bb0) (0xc0014abd60) Create stream
I0108 15:06:39.491498       8 log.go:172] (0xc000860bb0) (0xc0014abd60) Stream added, broadcasting: 3
I0108 15:06:39.492630       8 log.go:172] (0xc000860bb0) Reply frame received for 3
I0108 15:06:39.492664       8 log.go:172] (0xc000860bb0) (0xc0014abea0) Create stream
I0108 15:06:39.492670       8 log.go:172] (0xc000860bb0) (0xc0014abea0) Stream added, broadcasting: 5
I0108 15:06:39.494653       8 log.go:172] (0xc000860bb0) Reply frame received for 5
I0108 15:06:39.638384       8 log.go:172] (0xc000860bb0) Data frame received for 3
I0108 15:06:39.638456       8 log.go:172] (0xc0014abd60) (3) Data frame handling
I0108 15:06:39.638481       8 log.go:172] (0xc0014abd60) (3) Data frame sent
I0108 15:06:39.754650       8 log.go:172] (0xc000860bb0) (0xc0014abd60) Stream removed, broadcasting: 3
I0108 15:06:39.754851       8 log.go:172] (0xc000860bb0) Data frame received for 1
I0108 15:06:39.754884       8 log.go:172] (0xc0014abcc0) (1) Data frame handling
I0108 15:06:39.754906       8 log.go:172] (0xc000860bb0) (0xc0014abea0) Stream removed, broadcasting: 5
I0108 15:06:39.754963       8 log.go:172] (0xc0014abcc0) (1) Data frame sent
I0108 15:06:39.754985       8 log.go:172] (0xc000860bb0) (0xc0014abcc0) Stream removed, broadcasting: 1
I0108 15:06:39.755013       8 log.go:172] (0xc000860bb0) Go away received
I0108 15:06:39.755167       8 log.go:172] (0xc000860bb0) (0xc0014abcc0) Stream removed, broadcasting: 1
I0108 15:06:39.755188       8 log.go:172] (0xc000860bb0) (0xc0014abd60) Stream removed, broadcasting: 3
I0108 15:06:39.755202       8 log.go:172] (0xc000860bb0) (0xc0014abea0) Stream removed, broadcasting: 5
Jan  8 15:06:39.755: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:06:39.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-115" for this suite.
Jan  8 15:07:03.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:07:03.955: INFO: namespace pod-network-test-115 deletion completed in 24.192900022s

• [SLOW TEST:61.556 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
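
Note: the two ExecWithOptions calls above are the actual UDP reachability probes: curl asks the webserver on test pod 10.44.0.2:8080 to /dial each target pod over UDP and report what it heard back. Run by hand it looks like the following (pod names and IPs are specific to this run's namespace and will differ elsewhere; a healthy mesh answers with the target pod's hostname, e.g. {"responses":["netserver-0"]}):

$ kubectl -n pod-network-test-115 exec host-test-container-pod -c hostexec -- \
    curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'
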
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:07:03.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  8 15:07:12.704: INFO: Successfully updated pod "pod-update-activedeadlineseconds-95c09c46-d2aa-416d-9006-469c4239c880"
Jan  8 15:07:12.704: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-95c09c46-d2aa-416d-9006-469c4239c880" in namespace "pods-5483" to be "terminated due to deadline exceeded"
Jan  8 15:07:12.711: INFO: Pod "pod-update-activedeadlineseconds-95c09c46-d2aa-416d-9006-469c4239c880": Phase="Running", Reason="", readiness=true. Elapsed: 7.110075ms
Jan  8 15:07:14.727: INFO: Pod "pod-update-activedeadlineseconds-95c09c46-d2aa-416d-9006-469c4239c880": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.022954236s
Jan  8 15:07:14.727: INFO: Pod "pod-update-activedeadlineseconds-95c09c46-d2aa-416d-9006-469c4239c880" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:07:14.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5483" for this suite.
Jan  8 15:07:20.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:07:20.930: INFO: namespace pods-5483 deletion completed in 6.194045874s

• [SLOW TEST:16.974 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
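
Note: activeDeadlineSeconds is one of the few pod spec fields that may be mutated on a running pod (it can be set or lowered, never raised), which is what the update step above exercises. Once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, matching the Phase="Failed" transition in the log. A sketch (the pod name and 5s deadline are illustrative):

$ kubectl patch pod pod-update-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
$ kubectl get pod pod-update-demo -w   # phase goes Running -> Failed (DeadlineExceeded)
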
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:07:20.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0
Jan  8 15:07:21.049: INFO: Pod name my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0: Found 0 pods out of 1
Jan  8 15:07:26.062: INFO: Pod name my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0: Found 1 pods out of 1
Jan  8 15:07:26.062: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0" are running
Jan  8 15:07:30.079: INFO: Pod "my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0-qkzsm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 15:07:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 15:07:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 15:07:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-08 15:07:21 +0000 UTC Reason: Message:}])
Jan  8 15:07:30.079: INFO: Trying to dial the pod
Jan  8 15:07:35.141: INFO: Controller my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0: Got expected result from replica 1 [my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0-qkzsm]: "my-hostname-basic-1bd906b6-b9b6-4988-8526-cb9abb1914a0-qkzsm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:07:35.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6165" for this suite.
Jan  8 15:07:41.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:07:41.387: INFO: namespace replication-controller-6165 deletion completed in 6.23570306s

• [SLOW TEST:20.457 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
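
Note: the ReplicationController test above runs one replica of a "serve hostname" image and dials it, expecting each replica to answer with its own pod name (the "Got expected result from replica 1" line). A hand-rolled equivalent (the RC name is illustrative and the image tag is an assumption; the e2e serve-hostname binary listens on 9376):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-rc-demo
spec:
  replicas: 1
  selector:
    app: hostname-rc-demo
  template:
    metadata:
      labels:
        app: hostname-rc-demo
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF

Curling the pod IP on 9376 from inside the cluster should echo the pod's name back, one distinct name per replica.
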
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:07:41.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:07:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5316" for this suite.
Jan  8 15:07:58.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:07:58.202: INFO: namespace emptydir-wrapper-5316 deletion completed in 6.199311766s

• [SLOW TEST:16.814 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
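
Note: the "wrapper volumes" check above mounts a secret and a configMap, both of which the kubelet materializes through wrapped emptyDir volumes, into one pod and asserts that neither clobbers the other. A minimal reproduction (all names are illustrative):

$ kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
$ kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    # reads one file from each volume; both should survive side by side
    image: busybox
    command: ["sh", "-c", "cat /etc/secret/data-1 /etc/cm/data-1"]
    volumeMounts:
    - name: s
      mountPath: /etc/secret
    - name: c
      mountPath: /etc/cm
  volumes:
  - name: s
    secret:
      secretName: wrapper-secret
  - name: c
    configMap:
      name: wrapper-configmap
EOF
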
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:07:58.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:08:58.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4996" for this suite.
Jan  8 15:09:20.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:09:20.477: INFO: namespace container-probe-4996 deletion completed in 22.117047919s

• [SLOW TEST:82.275 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
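
Note: the probe test above schedules a pod whose readiness probe can never succeed, then holds for about a minute (15:07:58 to 15:08:58) to confirm the pod stays Running but never turns Ready and never restarts; readiness failures, unlike liveness failures, do not restart the container. A sketch (names and timings are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails
      initialDelaySeconds: 1
      periodSeconds: 5
EOF
$ kubectl get pod never-ready-demo   # READY stays 0/1, RESTARTS stays 0
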
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:09:20.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5ab25cb3-7d6e-459f-a42b-ff8d6eff17bd
STEP: Creating a pod to test consume configMaps
Jan  8 15:09:20.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999" in namespace "configmap-3848" to be "success or failure"
Jan  8 15:09:20.660: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Pending", Reason="", readiness=false. Elapsed: 9.400011ms
Jan  8 15:09:22.675: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02417734s
Jan  8 15:09:24.686: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035554205s
Jan  8 15:09:26.696: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045007416s
Jan  8 15:09:28.705: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05402925s
Jan  8 15:09:30.820: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169129789s
STEP: Saw pod success
Jan  8 15:09:30.820: INFO: Pod "pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999" satisfied condition "success or failure"
Jan  8 15:09:30.834: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999 container configmap-volume-test: 
STEP: delete the pod
Jan  8 15:09:30.999: INFO: Waiting for pod pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999 to disappear
Jan  8 15:09:31.007: INFO: Pod pod-configmaps-ed6f7393-7404-4a76-9ab9-8d153b88e999 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:09:31.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3848" for this suite.
Jan  8 15:09:37.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:09:37.184: INFO: namespace configmap-3848 deletion completed in 6.170942286s

• [SLOW TEST:16.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
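
Note: in this test's title, "with mappings" means the configMap volume uses items to remap a key onto a chosen file path, and "as non-root" means the consuming container runs with a non-zero UID. A minimal sketch (names and the UID are illustrative):

$ kubectl create configmap map-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox
    # the key data-1 was remapped to path/to/data-1 inside the mount
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: map-demo
      items:
      - key: data-1
        path: path/to/data-1
EOF
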
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:09:37.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-58b274ee-9652-4626-8f45-74a9596d2c10
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:09:37.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3917" for this suite.
Jan  8 15:09:43.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:09:43.391: INFO: namespace secrets-3917 deletion completed in 6.131623856s

• [SLOW TEST:6.206 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
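
Note: the API server validates secret keys like file names (alphanumerics plus '-', '_' and '.'), so a secret containing an empty key is rejected at create time, which is all this test asserts. Reproducing it directly (the secret name is illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: emptykey-demo
stringData:
  "": value-1
EOF
# expected: the request fails validation with an error naming the invalid (empty) key
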
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:09:43.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0108 15:10:25.458026       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 15:10:25.458: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:10:25.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9778" for this suite.
Jan  8 15:10:47.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:10:47.574: INFO: namespace gc-9778 deletion completed in 22.109160906s

• [SLOW TEST:64.182 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
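
Note: "delete options say so" refers to deleting with an Orphan propagation policy: the ReplicationController goes away, and the 30-second wait above confirms the garbage collector leaves its pods alone (their ownerReferences are removed rather than the pods deleted). With the kubectl of this vintage the same delete is spelled --cascade=false (the RC name and label are illustrative):

$ kubectl delete rc my-rc --cascade=false   # orphan the pods instead of cascading
$ kubectl get pods -l name=my-rc            # pods survive, now without an owner
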
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:10:47.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  8 15:10:47.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1986'
Jan  8 15:10:50.081: INFO: stderr: ""
Jan  8 15:10:50.081: INFO: stdout: "pod/pause created\n"
Jan  8 15:10:50.081: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  8 15:10:50.082: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1986" to be "running and ready"
Jan  8 15:10:50.097: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.770394ms
Jan  8 15:10:52.105: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023405297s
Jan  8 15:10:54.120: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038180677s
Jan  8 15:10:56.138: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056457014s
Jan  8 15:10:58.147: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.065376024s
Jan  8 15:10:58.147: INFO: Pod "pause" satisfied condition "running and ready"
Jan  8 15:10:58.147: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  8 15:10:58.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1986'
Jan  8 15:10:58.367: INFO: stderr: ""
Jan  8 15:10:58.367: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  8 15:10:58.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1986'
Jan  8 15:10:58.558: INFO: stderr: ""
Jan  8 15:10:58.558: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  8 15:10:58.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1986'
Jan  8 15:10:58.763: INFO: stderr: ""
Jan  8 15:10:58.763: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  8 15:10:58.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1986'
Jan  8 15:10:59.096: INFO: stderr: ""
Jan  8 15:10:59.096: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  8 15:10:59.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1986'
Jan  8 15:10:59.281: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  8 15:10:59.282: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  8 15:10:59.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1986'
Jan  8 15:10:59.423: INFO: stderr: "No resources found.\n"
Jan  8 15:10:59.423: INFO: stdout: ""
Jan  8 15:10:59.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1986 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  8 15:10:59.632: INFO: stderr: ""
Jan  8 15:10:59.632: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:10:59.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1986" for this suite.
Jan  8 15:11:05.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:11:05.842: INFO: namespace kubectl-1986 deletion completed in 6.206746111s

• [SLOW TEST:18.267 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
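
The label lifecycle exercised above reduces to three kubectl invocations, reproduced here as the test ran them (pod pause and namespace kubectl-1986 are from this run):

# add a label (trailing key=value)
kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-1986
# print it as an extra column; -L appends a TESTING-LABEL column to the get output
kubectl get pod pause -L testing-label --namespace=kubectl-1986
# remove it (trailing dash after the key)
kubectl label pods pause testing-label- --namespace=kubectl-1986
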
SSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:11:05.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7382, will wait for the garbage collector to delete the pods
Jan  8 15:11:18.024: INFO: Deleting Job.batch foo took: 10.384587ms
Jan  8 15:11:18.324: INFO: Terminating Job.batch foo pods took: 300.336062ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:11:56.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7382" for this suite.
Jan  8 15:12:02.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:12:02.972: INFO: namespace job-7382 deletion completed in 6.230343551s

• [SLOW TEST:57.129 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
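
A by-hand analogue of the delete step above, assuming the same Job foo in namespace job-7382: kubectl delete cascades by default, so the garbage collector removes the Job's pods along with the Job itself.

# delete the Job; its pods are garbage-collected
kubectl delete job foo --namespace=job-7382
# both lookups should eventually come back empty
kubectl get job foo --namespace=job-7382
kubectl get pods --namespace=job-7382
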
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:12:02.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-tb6x
STEP: Creating a pod to test atomic-volume-subpath
Jan  8 15:12:03.173: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tb6x" in namespace "subpath-8253" to be "success or failure"
Jan  8 15:12:03.190: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.41563ms
Jan  8 15:12:05.269: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096476684s
Jan  8 15:12:07.281: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108595941s
Jan  8 15:12:09.292: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118851242s
Jan  8 15:12:11.308: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.134990535s
Jan  8 15:12:13.371: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 10.198442356s
Jan  8 15:12:15.392: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 12.218924385s
Jan  8 15:12:17.401: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 14.22783341s
Jan  8 15:12:19.408: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 16.235475011s
Jan  8 15:12:21.420: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 18.247024585s
Jan  8 15:12:23.429: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 20.256527377s
Jan  8 15:12:25.439: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 22.266561317s
Jan  8 15:12:27.448: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 24.274960136s
Jan  8 15:12:29.457: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 26.283918192s
Jan  8 15:12:31.469: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Running", Reason="", readiness=true. Elapsed: 28.295855487s
Jan  8 15:12:33.479: INFO: Pod "pod-subpath-test-projected-tb6x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.30585589s
STEP: Saw pod success
Jan  8 15:12:33.479: INFO: Pod "pod-subpath-test-projected-tb6x" satisfied condition "success or failure"
Jan  8 15:12:33.486: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-tb6x container test-container-subpath-projected-tb6x: 
STEP: delete the pod
Jan  8 15:12:33.629: INFO: Waiting for pod pod-subpath-test-projected-tb6x to disappear
Jan  8 15:12:33.653: INFO: Pod pod-subpath-test-projected-tb6x no longer exists
STEP: Deleting pod pod-subpath-test-projected-tb6x
Jan  8 15:12:33.653: INFO: Deleting pod "pod-subpath-test-projected-tb6x" in namespace "subpath-8253"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:12:33.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8253" for this suite.
Jan  8 15:12:39.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:12:39.873: INFO: namespace subpath-8253 deletion completed in 6.206627067s

• [SLOW TEST:36.901 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
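
The atomic-writer case above mounts a single entry of a projected volume through subPath. A minimal sketch of that pattern; every name here is illustrative rather than the generated pod-subpath-test-projected-tb6x from this run:

kubectl create configmap subpath-demo-cm --from-literal=data=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "cat /probe/data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /probe/data   # mount one file out of the volume...
      subPath: data            # ...addressed by its path inside the volume
EOF
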
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:12:39.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  8 15:12:40.031: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8" in namespace "projected-3216" to be "success or failure"
Jan  8 15:12:40.157: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8": Phase="Pending", Reason="", readiness=false. Elapsed: 126.236669ms
Jan  8 15:12:42.167: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136016984s
Jan  8 15:12:44.200: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169186337s
Jan  8 15:12:46.208: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177366974s
Jan  8 15:12:48.227: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.196031296s
STEP: Saw pod success
Jan  8 15:12:48.227: INFO: Pod "downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8" satisfied condition "success or failure"
Jan  8 15:12:48.232: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8 container client-container: 
STEP: delete the pod
Jan  8 15:12:48.323: INFO: Waiting for pod downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8 to disappear
Jan  8 15:12:48.361: INFO: Pod downwardapi-volume-5c5d5095-3164-4664-9bdc-1b87ca1b37b8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:12:48.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3216" for this suite.
Jan  8 15:12:54.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:12:54.528: INFO: namespace projected-3216 deletion completed in 6.157539257s

• [SLOW TEST:14.653 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
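
The "podname only" case corresponds to a projected downwardAPI volume that exposes metadata.name as a file; a sketch under illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
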
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:12:54.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-225.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-225.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  8 15:13:06.720: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.736: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.747: INFO: Unable to read wheezy_udp@PodARecord from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.753: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.758: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.766: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.774: INFO: Unable to read jessie_udp@PodARecord from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.779: INFO: Unable to read jessie_tcp@PodARecord from pod dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da: the server could not find the requested resource (get pods dns-test-554acdc5-28ed-484e-897d-3e24862507da)
Jan  8 15:13:06.779: INFO: Lookups using dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  8 15:13:11.878: INFO: DNS probes using dns-225/dns-test-554acdc5-28ed-484e-897d-3e24862507da succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:13:11.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-225" for this suite.
Jan  8 15:13:18.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:13:18.252: INFO: namespace dns-225 deletion completed in 6.22793311s

• [SLOW TEST:23.723 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
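
The doubled dollars in the logged probe script are kubelet escaping: in a pod spec's command, $(VAR) references are expanded by kubelet and $$ reduces to a literal $. Run interactively, one iteration of the loop comes out roughly as:

# resolve the API service name over UDP, then TCP; a non-empty answer means OK
check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo OK
check="$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo OK
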
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:13:18.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  8 15:13:18.426: INFO: Waiting up to 5m0s for pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec" in namespace "emptydir-7410" to be "success or failure"
Jan  8 15:13:18.435: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289484ms
Jan  8 15:13:20.444: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017952477s
Jan  8 15:13:22.454: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027077997s
Jan  8 15:13:24.467: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040388825s
Jan  8 15:13:26.480: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054053496s
STEP: Saw pod success
Jan  8 15:13:26.481: INFO: Pod "pod-686ce3b2-d550-4932-aeb3-d329da4992ec" satisfied condition "success or failure"
Jan  8 15:13:26.486: INFO: Trying to get logs from node iruya-node pod pod-686ce3b2-d550-4932-aeb3-d329da4992ec container test-container: 
STEP: delete the pod
Jan  8 15:13:26.637: INFO: Waiting for pod pod-686ce3b2-d550-4932-aeb3-d329da4992ec to disappear
Jan  8 15:13:26.645: INFO: Pod pod-686ce3b2-d550-4932-aeb3-d329da4992ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:13:26.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7410" for this suite.
Jan  8 15:13:32.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:13:32.806: INFO: namespace emptydir-7410 deletion completed in 6.154675372s

• [SLOW TEST:14.554 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
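
The (root,0777,tmpfs) case reduces to an emptyDir with medium: Memory; the sketch below shows the tmpfs mount under illustrative names (the conformance test additionally writes a file and verifies owner root and mode 0777):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF
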
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:13:32.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3127
I0108 15:13:32.925954       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3127, replica count: 1
I0108 15:13:33.976536       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:34.976847       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:35.977306       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:36.978619       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:37.979688       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:38.980092       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:39.980484       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0108 15:13:40.980757       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  8 15:13:41.174: INFO: Created: latency-svc-ffd4l
Jan  8 15:13:41.193: INFO: Got endpoints: latency-svc-ffd4l [112.653257ms]
Jan  8 15:13:41.234: INFO: Created: latency-svc-4dj7n
Jan  8 15:13:41.245: INFO: Got endpoints: latency-svc-4dj7n [51.380149ms]
Jan  8 15:13:41.279: INFO: Created: latency-svc-zxlr7
Jan  8 15:13:41.394: INFO: Got endpoints: latency-svc-zxlr7 [198.835497ms]
Jan  8 15:13:41.407: INFO: Created: latency-svc-24d9g
Jan  8 15:13:41.419: INFO: Got endpoints: latency-svc-24d9g [221.547674ms]
Jan  8 15:13:41.463: INFO: Created: latency-svc-9vgrl
Jan  8 15:13:41.473: INFO: Got endpoints: latency-svc-9vgrl [276.086347ms]
Jan  8 15:13:41.597: INFO: Created: latency-svc-6c24s
Jan  8 15:13:41.606: INFO: Got endpoints: latency-svc-6c24s [409.65699ms]
Jan  8 15:13:41.646: INFO: Created: latency-svc-2jnr7
Jan  8 15:13:41.650: INFO: Got endpoints: latency-svc-2jnr7 [453.635034ms]
Jan  8 15:13:41.762: INFO: Created: latency-svc-72gmg
Jan  8 15:13:41.802: INFO: Created: latency-svc-m64pt
Jan  8 15:13:41.808: INFO: Got endpoints: latency-svc-72gmg [611.215198ms]
Jan  8 15:13:41.817: INFO: Got endpoints: latency-svc-m64pt [621.075559ms]
Jan  8 15:13:41.959: INFO: Created: latency-svc-vl4l6
Jan  8 15:13:41.968: INFO: Got endpoints: latency-svc-vl4l6 [771.803544ms]
Jan  8 15:13:42.032: INFO: Created: latency-svc-2lf6r
Jan  8 15:13:42.035: INFO: Got endpoints: latency-svc-2lf6r [838.198593ms]
Jan  8 15:13:42.164: INFO: Created: latency-svc-grr9v
Jan  8 15:13:42.175: INFO: Got endpoints: latency-svc-grr9v [978.053045ms]
Jan  8 15:13:42.223: INFO: Created: latency-svc-z9xnn
Jan  8 15:13:42.230: INFO: Got endpoints: latency-svc-z9xnn [1.033319628s]
Jan  8 15:13:42.397: INFO: Created: latency-svc-7gpb8
Jan  8 15:13:42.398: INFO: Got endpoints: latency-svc-7gpb8 [1.200478545s]
Jan  8 15:13:42.474: INFO: Created: latency-svc-nm52v
Jan  8 15:13:42.545: INFO: Got endpoints: latency-svc-nm52v [1.349199489s]
Jan  8 15:13:42.590: INFO: Created: latency-svc-tznlr
Jan  8 15:13:42.607: INFO: Got endpoints: latency-svc-tznlr [1.40933414s]
Jan  8 15:13:42.637: INFO: Created: latency-svc-2zhgf
Jan  8 15:13:42.702: INFO: Got endpoints: latency-svc-2zhgf [1.456489315s]
Jan  8 15:13:42.731: INFO: Created: latency-svc-v77b2
Jan  8 15:13:42.738: INFO: Got endpoints: latency-svc-v77b2 [1.34326095s]
Jan  8 15:13:42.783: INFO: Created: latency-svc-ncwz9
Jan  8 15:13:42.783: INFO: Got endpoints: latency-svc-ncwz9 [1.363971686s]
Jan  8 15:13:42.863: INFO: Created: latency-svc-j7w6l
Jan  8 15:13:42.884: INFO: Got endpoints: latency-svc-j7w6l [1.411367789s]
Jan  8 15:13:42.943: INFO: Created: latency-svc-fdh5g
Jan  8 15:13:42.954: INFO: Got endpoints: latency-svc-fdh5g [1.348101264s]
Jan  8 15:13:43.106: INFO: Created: latency-svc-gn9pm
Jan  8 15:13:43.111: INFO: Got endpoints: latency-svc-gn9pm [1.460714635s]
Jan  8 15:13:43.154: INFO: Created: latency-svc-jxnvx
Jan  8 15:13:43.160: INFO: Got endpoints: latency-svc-jxnvx [1.351682332s]
Jan  8 15:13:43.204: INFO: Created: latency-svc-lhf6b
Jan  8 15:13:43.256: INFO: Got endpoints: latency-svc-lhf6b [1.438285695s]
Jan  8 15:13:43.324: INFO: Created: latency-svc-x52db
Jan  8 15:13:43.335: INFO: Got endpoints: latency-svc-x52db [1.366845982s]
Jan  8 15:13:43.418: INFO: Created: latency-svc-2bqpg
Jan  8 15:13:43.432: INFO: Got endpoints: latency-svc-2bqpg [1.396894112s]
Jan  8 15:13:43.477: INFO: Created: latency-svc-mrbjd
Jan  8 15:13:43.487: INFO: Got endpoints: latency-svc-mrbjd [1.311635305s]
Jan  8 15:13:43.562: INFO: Created: latency-svc-hlghj
Jan  8 15:13:43.569: INFO: Got endpoints: latency-svc-hlghj [1.338302306s]
Jan  8 15:13:43.611: INFO: Created: latency-svc-q5lvl
Jan  8 15:13:43.619: INFO: Got endpoints: latency-svc-q5lvl [1.221444611s]
Jan  8 15:13:43.724: INFO: Created: latency-svc-7gl92
Jan  8 15:13:43.734: INFO: Got endpoints: latency-svc-7gl92 [1.188638537s]
Jan  8 15:13:43.778: INFO: Created: latency-svc-t87np
Jan  8 15:13:43.808: INFO: Got endpoints: latency-svc-t87np [1.201163184s]
Jan  8 15:13:43.898: INFO: Created: latency-svc-x7pfz
Jan  8 15:13:43.915: INFO: Got endpoints: latency-svc-x7pfz [1.212874143s]
Jan  8 15:13:43.984: INFO: Created: latency-svc-jqlhm
Jan  8 15:13:44.112: INFO: Got endpoints: latency-svc-jqlhm [1.374093291s]
Jan  8 15:13:44.157: INFO: Created: latency-svc-4jhjq
Jan  8 15:13:44.174: INFO: Got endpoints: latency-svc-4jhjq [1.39080518s]
Jan  8 15:13:44.206: INFO: Created: latency-svc-kmhjv
Jan  8 15:13:44.309: INFO: Got endpoints: latency-svc-kmhjv [1.424397819s]
Jan  8 15:13:44.337: INFO: Created: latency-svc-psqlj
Jan  8 15:13:44.344: INFO: Got endpoints: latency-svc-psqlj [1.389223635s]
Jan  8 15:13:44.398: INFO: Created: latency-svc-4zgwf
Jan  8 15:13:44.401: INFO: Got endpoints: latency-svc-4zgwf [1.290001123s]
Jan  8 15:13:44.517: INFO: Created: latency-svc-fh8qz
Jan  8 15:13:44.528: INFO: Got endpoints: latency-svc-fh8qz [1.367853375s]
Jan  8 15:13:44.588: INFO: Created: latency-svc-5p8kj
Jan  8 15:13:44.632: INFO: Got endpoints: latency-svc-5p8kj [1.375912857s]
Jan  8 15:13:44.663: INFO: Created: latency-svc-jnhsh
Jan  8 15:13:44.670: INFO: Got endpoints: latency-svc-jnhsh [1.335464665s]
Jan  8 15:13:44.722: INFO: Created: latency-svc-zsr66
Jan  8 15:13:44.727: INFO: Got endpoints: latency-svc-zsr66 [1.2940716s]
Jan  8 15:13:44.832: INFO: Created: latency-svc-skcmj
Jan  8 15:13:44.859: INFO: Got endpoints: latency-svc-skcmj [1.3721983s]
Jan  8 15:13:44.900: INFO: Created: latency-svc-5hh8f
Jan  8 15:13:44.943: INFO: Got endpoints: latency-svc-5hh8f [1.37448908s]
Jan  8 15:13:44.978: INFO: Created: latency-svc-9w9m2
Jan  8 15:13:45.206: INFO: Got endpoints: latency-svc-9w9m2 [1.586388724s]
Jan  8 15:13:45.214: INFO: Created: latency-svc-fcsnx
Jan  8 15:13:45.232: INFO: Got endpoints: latency-svc-fcsnx [1.497660061s]
Jan  8 15:13:45.265: INFO: Created: latency-svc-5lwg6
Jan  8 15:13:45.275: INFO: Got endpoints: latency-svc-5lwg6 [1.46643928s]
Jan  8 15:13:45.310: INFO: Created: latency-svc-zcqzf
Jan  8 15:13:45.369: INFO: Got endpoints: latency-svc-zcqzf [1.453406271s]
Jan  8 15:13:45.399: INFO: Created: latency-svc-ml5sw
Jan  8 15:13:45.411: INFO: Got endpoints: latency-svc-ml5sw [1.298308494s]
Jan  8 15:13:45.441: INFO: Created: latency-svc-j7bqb
Jan  8 15:13:45.445: INFO: Got endpoints: latency-svc-j7bqb [1.270774065s]
Jan  8 15:13:45.525: INFO: Created: latency-svc-2gk4r
Jan  8 15:13:45.530: INFO: Got endpoints: latency-svc-2gk4r [1.22057831s]
Jan  8 15:13:45.577: INFO: Created: latency-svc-952b5
Jan  8 15:13:45.590: INFO: Got endpoints: latency-svc-952b5 [1.246094072s]
Jan  8 15:13:45.724: INFO: Created: latency-svc-dk72d
Jan  8 15:13:45.754: INFO: Got endpoints: latency-svc-dk72d [1.352806553s]
Jan  8 15:13:45.812: INFO: Created: latency-svc-9bjrg
Jan  8 15:13:45.874: INFO: Got endpoints: latency-svc-9bjrg [1.346479991s]
Jan  8 15:13:45.883: INFO: Created: latency-svc-bbb64
Jan  8 15:13:45.887: INFO: Got endpoints: latency-svc-bbb64 [1.254743696s]
Jan  8 15:13:45.966: INFO: Created: latency-svc-6bwtb
Jan  8 15:13:46.083: INFO: Got endpoints: latency-svc-6bwtb [1.41244801s]
Jan  8 15:13:46.158: INFO: Created: latency-svc-pxh68
Jan  8 15:13:46.158: INFO: Got endpoints: latency-svc-pxh68 [1.431813387s]
Jan  8 15:13:46.288: INFO: Created: latency-svc-kkmx5
Jan  8 15:13:46.296: INFO: Got endpoints: latency-svc-kkmx5 [1.436731291s]
Jan  8 15:13:46.355: INFO: Created: latency-svc-npp79
Jan  8 15:13:46.364: INFO: Got endpoints: latency-svc-npp79 [1.420346051s]
Jan  8 15:13:46.471: INFO: Created: latency-svc-4dxpt
Jan  8 15:13:46.477: INFO: Got endpoints: latency-svc-4dxpt [1.271573278s]
Jan  8 15:13:46.536: INFO: Created: latency-svc-6tcz9
Jan  8 15:13:46.624: INFO: Got endpoints: latency-svc-6tcz9 [1.392224103s]
Jan  8 15:13:46.646: INFO: Created: latency-svc-hnwvv
Jan  8 15:13:46.662: INFO: Got endpoints: latency-svc-hnwvv [1.387487376s]
Jan  8 15:13:46.693: INFO: Created: latency-svc-wtf46
Jan  8 15:13:47.211: INFO: Got endpoints: latency-svc-wtf46 [1.841750368s]
Jan  8 15:13:47.236: INFO: Created: latency-svc-chqpt
Jan  8 15:13:47.262: INFO: Got endpoints: latency-svc-chqpt [1.851048229s]
Jan  8 15:13:47.291: INFO: Created: latency-svc-mvcxl
Jan  8 15:13:47.299: INFO: Got endpoints: latency-svc-mvcxl [1.853943099s]
Jan  8 15:13:47.397: INFO: Created: latency-svc-pq9w6
Jan  8 15:13:47.410: INFO: Got endpoints: latency-svc-pq9w6 [148.245614ms]
Jan  8 15:13:47.443: INFO: Created: latency-svc-v6p8n
Jan  8 15:13:47.453: INFO: Got endpoints: latency-svc-v6p8n [1.922983036s]
Jan  8 15:13:47.546: INFO: Created: latency-svc-d4fhf
Jan  8 15:13:47.556: INFO: Got endpoints: latency-svc-d4fhf [1.965399284s]
Jan  8 15:13:47.596: INFO: Created: latency-svc-flnxx
Jan  8 15:13:47.611: INFO: Got endpoints: latency-svc-flnxx [1.857566956s]
Jan  8 15:13:47.696: INFO: Created: latency-svc-brst7
Jan  8 15:13:47.706: INFO: Got endpoints: latency-svc-brst7 [1.830871055s]
Jan  8 15:13:47.756: INFO: Created: latency-svc-ptz7m
Jan  8 15:13:47.762: INFO: Got endpoints: latency-svc-ptz7m [1.875566762s]
Jan  8 15:13:47.864: INFO: Created: latency-svc-t99lv
Jan  8 15:13:47.866: INFO: Got endpoints: latency-svc-t99lv [1.78349245s]
Jan  8 15:13:47.924: INFO: Created: latency-svc-xcnqx
Jan  8 15:13:47.924: INFO: Got endpoints: latency-svc-xcnqx [1.765375517s]
Jan  8 15:13:48.053: INFO: Created: latency-svc-w6hgk
Jan  8 15:13:48.065: INFO: Got endpoints: latency-svc-w6hgk [1.76875627s]
Jan  8 15:13:48.107: INFO: Created: latency-svc-6h9gt
Jan  8 15:13:48.110: INFO: Got endpoints: latency-svc-6h9gt [1.745674069s]
Jan  8 15:13:48.277: INFO: Created: latency-svc-2d4bs
Jan  8 15:13:48.294: INFO: Got endpoints: latency-svc-2d4bs [1.816748444s]
Jan  8 15:13:48.349: INFO: Created: latency-svc-lhklj
Jan  8 15:13:48.495: INFO: Got endpoints: latency-svc-lhklj [1.870077267s]
Jan  8 15:13:48.513: INFO: Created: latency-svc-9sp7d
Jan  8 15:13:48.530: INFO: Got endpoints: latency-svc-9sp7d [1.867144964s]
Jan  8 15:13:48.577: INFO: Created: latency-svc-psbxd
Jan  8 15:13:48.660: INFO: Got endpoints: latency-svc-psbxd [1.449228084s]
Jan  8 15:13:48.661: INFO: Created: latency-svc-xkxvg
Jan  8 15:13:48.669: INFO: Got endpoints: latency-svc-xkxvg [1.369432569s]
Jan  8 15:13:48.721: INFO: Created: latency-svc-nd7lq
Jan  8 15:13:48.729: INFO: Got endpoints: latency-svc-nd7lq [1.318542336s]
Jan  8 15:13:48.838: INFO: Created: latency-svc-gfwmp
Jan  8 15:13:48.845: INFO: Got endpoints: latency-svc-gfwmp [1.391534972s]
Jan  8 15:13:48.917: INFO: Created: latency-svc-rd4cm
Jan  8 15:13:48.919: INFO: Got endpoints: latency-svc-rd4cm [1.363409489s]
Jan  8 15:13:49.140: INFO: Created: latency-svc-kp7jr
Jan  8 15:13:49.151: INFO: Got endpoints: latency-svc-kp7jr [1.539062063s]
Jan  8 15:13:49.400: INFO: Created: latency-svc-mhj69
Jan  8 15:13:49.434: INFO: Got endpoints: latency-svc-mhj69 [1.727974162s]
Jan  8 15:13:49.584: INFO: Created: latency-svc-q8qws
Jan  8 15:13:49.630: INFO: Got endpoints: latency-svc-q8qws [1.867663404s]
Jan  8 15:13:49.671: INFO: Created: latency-svc-md4xj
Jan  8 15:13:49.769: INFO: Got endpoints: latency-svc-md4xj [1.902375619s]
Jan  8 15:13:49.849: INFO: Created: latency-svc-d4sw4
Jan  8 15:13:50.017: INFO: Got endpoints: latency-svc-d4sw4 [2.093124645s]
Jan  8 15:13:50.018: INFO: Created: latency-svc-rcwfk
Jan  8 15:13:50.039: INFO: Got endpoints: latency-svc-rcwfk [1.973991001s]
Jan  8 15:13:50.107: INFO: Created: latency-svc-4n4dx
Jan  8 15:13:50.247: INFO: Got endpoints: latency-svc-4n4dx [2.136922462s]
Jan  8 15:13:50.255: INFO: Created: latency-svc-z59mj
Jan  8 15:13:50.260: INFO: Got endpoints: latency-svc-z59mj [1.965206955s]
Jan  8 15:13:50.340: INFO: Created: latency-svc-nxk4m
Jan  8 15:13:50.431: INFO: Got endpoints: latency-svc-nxk4m [1.935138803s]
Jan  8 15:13:50.449: INFO: Created: latency-svc-lq6ph
Jan  8 15:13:50.449: INFO: Got endpoints: latency-svc-lq6ph [1.91882097s]
Jan  8 15:13:50.501: INFO: Created: latency-svc-7d656
Jan  8 15:13:50.504: INFO: Got endpoints: latency-svc-7d656 [1.843725246s]
Jan  8 15:13:50.669: INFO: Created: latency-svc-7bt52
Jan  8 15:13:50.719: INFO: Got endpoints: latency-svc-7bt52 [2.050031003s]
Jan  8 15:13:50.726: INFO: Created: latency-svc-s4jgd
Jan  8 15:13:50.735: INFO: Got endpoints: latency-svc-s4jgd [2.006234399s]
Jan  8 15:13:50.876: INFO: Created: latency-svc-hvwh9
Jan  8 15:13:50.900: INFO: Got endpoints: latency-svc-hvwh9 [2.055209637s]
Jan  8 15:13:50.945: INFO: Created: latency-svc-xvbd9
Jan  8 15:13:51.057: INFO: Got endpoints: latency-svc-xvbd9 [2.137512045s]
Jan  8 15:13:51.063: INFO: Created: latency-svc-h5hqr
Jan  8 15:13:51.075: INFO: Got endpoints: latency-svc-h5hqr [1.924713877s]
Jan  8 15:13:51.116: INFO: Created: latency-svc-ql866
Jan  8 15:13:51.127: INFO: Got endpoints: latency-svc-ql866 [1.693659992s]
Jan  8 15:13:51.295: INFO: Created: latency-svc-9gs8f
Jan  8 15:13:51.313: INFO: Got endpoints: latency-svc-9gs8f [1.682176238s]
Jan  8 15:13:51.364: INFO: Created: latency-svc-q2k75
Jan  8 15:13:51.452: INFO: Got endpoints: latency-svc-q2k75 [1.683093651s]
Jan  8 15:13:51.490: INFO: Created: latency-svc-cnn59
Jan  8 15:13:51.493: INFO: Got endpoints: latency-svc-cnn59 [1.475187702s]
Jan  8 15:13:51.553: INFO: Created: latency-svc-xvvpg
Jan  8 15:13:51.643: INFO: Got endpoints: latency-svc-xvvpg [1.603703751s]
Jan  8 15:13:51.658: INFO: Created: latency-svc-gjjzn
Jan  8 15:13:51.668: INFO: Got endpoints: latency-svc-gjjzn [1.421165563s]
Jan  8 15:13:51.843: INFO: Created: latency-svc-8nr45
Jan  8 15:13:51.867: INFO: Got endpoints: latency-svc-8nr45 [1.60765935s]
Jan  8 15:13:51.919: INFO: Created: latency-svc-4dvvv
Jan  8 15:13:51.928: INFO: Got endpoints: latency-svc-4dvvv [1.49759738s]
Jan  8 15:13:52.093: INFO: Created: latency-svc-497v7
Jan  8 15:13:52.119: INFO: Got endpoints: latency-svc-497v7 [1.669928343s]
Jan  8 15:13:52.250: INFO: Created: latency-svc-k5hl8
Jan  8 15:13:52.268: INFO: Got endpoints: latency-svc-k5hl8 [1.764013541s]
Jan  8 15:13:52.319: INFO: Created: latency-svc-xrq8s
Jan  8 15:13:52.335: INFO: Got endpoints: latency-svc-xrq8s [1.615884946s]
Jan  8 15:13:52.515: INFO: Created: latency-svc-z5v99
Jan  8 15:13:52.515: INFO: Got endpoints: latency-svc-z5v99 [1.779349885s]
Jan  8 15:13:52.634: INFO: Created: latency-svc-9gjqk
Jan  8 15:13:52.663: INFO: Got endpoints: latency-svc-9gjqk [1.763228835s]
Jan  8 15:13:52.733: INFO: Created: latency-svc-fjqlf
Jan  8 15:13:52.800: INFO: Got endpoints: latency-svc-fjqlf [1.743089681s]
Jan  8 15:13:52.859: INFO: Created: latency-svc-l4xtr
Jan  8 15:13:52.888: INFO: Created: latency-svc-ctz7v
Jan  8 15:13:52.890: INFO: Got endpoints: latency-svc-l4xtr [1.814560024s]
Jan  8 15:13:53.052: INFO: Got endpoints: latency-svc-ctz7v [1.924457664s]
Jan  8 15:13:53.082: INFO: Created: latency-svc-dj7rv
Jan  8 15:13:53.098: INFO: Got endpoints: latency-svc-dj7rv [1.784956556s]
Jan  8 15:13:53.224: INFO: Created: latency-svc-zdf2z
Jan  8 15:13:53.239: INFO: Got endpoints: latency-svc-zdf2z [1.786884168s]
Jan  8 15:13:53.292: INFO: Created: latency-svc-lwbwr
Jan  8 15:13:53.423: INFO: Got endpoints: latency-svc-lwbwr [1.930862177s]
Jan  8 15:13:53.438: INFO: Created: latency-svc-dwd5g
Jan  8 15:13:53.448: INFO: Got endpoints: latency-svc-dwd5g [1.805426752s]
Jan  8 15:13:53.495: INFO: Created: latency-svc-hx5g9
Jan  8 15:13:53.500: INFO: Got endpoints: latency-svc-hx5g9 [1.831890243s]
Jan  8 15:13:53.609: INFO: Created: latency-svc-mkrxc
Jan  8 15:13:53.613: INFO: Got endpoints: latency-svc-mkrxc [1.745531764s]
Jan  8 15:13:53.683: INFO: Created: latency-svc-gv5km
Jan  8 15:13:53.691: INFO: Got endpoints: latency-svc-gv5km [1.762237119s]
Jan  8 15:13:53.857: INFO: Created: latency-svc-z4l75
Jan  8 15:13:53.913: INFO: Created: latency-svc-227rf
Jan  8 15:13:53.914: INFO: Got endpoints: latency-svc-z4l75 [1.794849971s]
Jan  8 15:13:53.931: INFO: Got endpoints: latency-svc-227rf [1.662497364s]
Jan  8 15:13:54.107: INFO: Created: latency-svc-7zfvt
Jan  8 15:13:54.123: INFO: Got endpoints: latency-svc-7zfvt [1.787389815s]
Jan  8 15:13:54.201: INFO: Created: latency-svc-jsv7p
Jan  8 15:13:54.297: INFO: Got endpoints: latency-svc-jsv7p [1.781449503s]
Jan  8 15:13:54.331: INFO: Created: latency-svc-26ff7
Jan  8 15:13:54.345: INFO: Got endpoints: latency-svc-26ff7 [1.681331495s]
Jan  8 15:13:54.538: INFO: Created: latency-svc-rrj62
Jan  8 15:13:54.606: INFO: Got endpoints: latency-svc-rrj62 [1.80578285s]
Jan  8 15:13:54.620: INFO: Created: latency-svc-fn7gm
Jan  8 15:13:54.783: INFO: Got endpoints: latency-svc-fn7gm [1.893294495s]
Jan  8 15:13:54.846: INFO: Created: latency-svc-tg8vs
Jan  8 15:13:55.029: INFO: Got endpoints: latency-svc-tg8vs [1.977077068s]
Jan  8 15:13:55.032: INFO: Created: latency-svc-4t77f
Jan  8 15:13:55.039: INFO: Got endpoints: latency-svc-4t77f [1.941518867s]
Jan  8 15:13:55.225: INFO: Created: latency-svc-bbjmp
Jan  8 15:13:55.263: INFO: Got endpoints: latency-svc-bbjmp [2.023437888s]
Jan  8 15:13:55.264: INFO: Created: latency-svc-ccp99
Jan  8 15:13:55.273: INFO: Got endpoints: latency-svc-ccp99 [1.849129502s]
Jan  8 15:13:55.302: INFO: Created: latency-svc-l8lpp
Jan  8 15:13:55.310: INFO: Got endpoints: latency-svc-l8lpp [1.861217456s]
Jan  8 15:13:55.493: INFO: Created: latency-svc-6m2rh
Jan  8 15:13:55.505: INFO: Got endpoints: latency-svc-6m2rh [2.004435996s]
Jan  8 15:13:55.617: INFO: Created: latency-svc-z4hph
Jan  8 15:13:55.619: INFO: Got endpoints: latency-svc-z4hph [2.005520894s]
Jan  8 15:13:55.693: INFO: Created: latency-svc-ws5rf
Jan  8 15:13:55.693: INFO: Got endpoints: latency-svc-ws5rf [2.00172708s]
Jan  8 15:13:55.855: INFO: Created: latency-svc-bnz6w
Jan  8 15:13:55.875: INFO: Got endpoints: latency-svc-bnz6w [1.961775442s]
Jan  8 15:13:56.127: INFO: Created: latency-svc-d4jlh
Jan  8 15:13:56.146: INFO: Got endpoints: latency-svc-d4jlh [2.215028947s]
Jan  8 15:13:56.199: INFO: Created: latency-svc-892lg
Jan  8 15:13:56.218: INFO: Got endpoints: latency-svc-892lg [2.094749714s]
Jan  8 15:13:56.315: INFO: Created: latency-svc-9pl9v
Jan  8 15:13:56.364: INFO: Got endpoints: latency-svc-9pl9v [2.067767693s]
Jan  8 15:13:56.372: INFO: Created: latency-svc-6jgpn
Jan  8 15:13:56.377: INFO: Got endpoints: latency-svc-6jgpn [2.031786174s]
Jan  8 15:13:56.470: INFO: Created: latency-svc-xb7m8
Jan  8 15:13:56.484: INFO: Got endpoints: latency-svc-xb7m8 [1.877092428s]
Jan  8 15:13:56.554: INFO: Created: latency-svc-pw7p5
Jan  8 15:13:56.666: INFO: Got endpoints: latency-svc-pw7p5 [1.881755338s]
Jan  8 15:13:56.698: INFO: Created: latency-svc-w92w6
Jan  8 15:13:56.754: INFO: Got endpoints: latency-svc-w92w6 [1.724040076s]
Jan  8 15:13:56.767: INFO: Created: latency-svc-79ffn
Jan  8 15:13:56.902: INFO: Got endpoints: latency-svc-79ffn [1.862102237s]
Jan  8 15:13:56.911: INFO: Created: latency-svc-p5579
Jan  8 15:13:56.921: INFO: Got endpoints: latency-svc-p5579 [1.6583069s]
Jan  8 15:13:56.996: INFO: Created: latency-svc-zjtwg
Jan  8 15:13:57.112: INFO: Got endpoints: latency-svc-zjtwg [1.83891579s]
Jan  8 15:13:57.177: INFO: Created: latency-svc-2bhjt
Jan  8 15:13:57.485: INFO: Got endpoints: latency-svc-2bhjt [2.175413826s]
Jan  8 15:13:57.551: INFO: Created: latency-svc-zjctw
Jan  8 15:13:57.567: INFO: Got endpoints: latency-svc-zjctw [2.061808356s]
Jan  8 15:13:57.661: INFO: Created: latency-svc-75gdd
Jan  8 15:13:57.700: INFO: Got endpoints: latency-svc-75gdd [2.081030032s]
Jan  8 15:13:57.705: INFO: Created: latency-svc-9fvw8
Jan  8 15:13:57.712: INFO: Got endpoints: latency-svc-9fvw8 [2.019748232s]
Jan  8 15:13:57.754: INFO: Created: latency-svc-hcwxd
Jan  8 15:13:57.826: INFO: Got endpoints: latency-svc-hcwxd [1.950269764s]
Jan  8 15:13:57.862: INFO: Created: latency-svc-89bl6
Jan  8 15:13:57.864: INFO: Got endpoints: latency-svc-89bl6 [1.717575073s]
Jan  8 15:13:57.902: INFO: Created: latency-svc-f7ljz
Jan  8 15:13:57.913: INFO: Got endpoints: latency-svc-f7ljz [1.69545414s]
Jan  8 15:13:58.014: INFO: Created: latency-svc-k8zrw
Jan  8 15:13:58.022: INFO: Got endpoints: latency-svc-k8zrw [1.657553457s]
Jan  8 15:13:58.155: INFO: Created: latency-svc-7sltj
Jan  8 15:13:58.160: INFO: Got endpoints: latency-svc-7sltj [1.783333046s]
Jan  8 15:13:58.219: INFO: Created: latency-svc-rvbbk
Jan  8 15:13:58.224: INFO: Got endpoints: latency-svc-rvbbk [1.739545766s]
Jan  8 15:13:58.350: INFO: Created: latency-svc-7kflb
Jan  8 15:13:58.350: INFO: Got endpoints: latency-svc-7kflb [1.683707409s]
Jan  8 15:13:58.413: INFO: Created: latency-svc-spcbw
Jan  8 15:13:58.428: INFO: Got endpoints: latency-svc-spcbw [1.674133437s]
Jan  8 15:13:58.547: INFO: Created: latency-svc-vqkfs
Jan  8 15:13:58.551: INFO: Got endpoints: latency-svc-vqkfs [1.649586401s]
Jan  8 15:13:58.614: INFO: Created: latency-svc-2dq4v
Jan  8 15:13:58.676: INFO: Got endpoints: latency-svc-2dq4v [1.754558398s]
Jan  8 15:13:58.714: INFO: Created: latency-svc-mvgv7
Jan  8 15:13:58.725: INFO: Got endpoints: latency-svc-mvgv7 [1.613569257s]
Jan  8 15:13:58.781: INFO: Created: latency-svc-4g49m
Jan  8 15:13:58.847: INFO: Got endpoints: latency-svc-4g49m [1.361742978s]
Jan  8 15:13:58.898: INFO: Created: latency-svc-929qc
Jan  8 15:13:58.914: INFO: Got endpoints: latency-svc-929qc [1.347142124s]
Jan  8 15:13:59.103: INFO: Created: latency-svc-h4qlp
Jan  8 15:13:59.118: INFO: Got endpoints: latency-svc-h4qlp [1.418083735s]
Jan  8 15:13:59.166: INFO: Created: latency-svc-rh4xc
Jan  8 15:13:59.171: INFO: Got endpoints: latency-svc-rh4xc [1.458932535s]
Jan  8 15:13:59.305: INFO: Created: latency-svc-94244
Jan  8 15:13:59.315: INFO: Got endpoints: latency-svc-94244 [1.488549483s]
Jan  8 15:13:59.379: INFO: Created: latency-svc-7bwqs
Jan  8 15:13:59.387: INFO: Got endpoints: latency-svc-7bwqs [1.523177402s]
Jan  8 15:13:59.489: INFO: Created: latency-svc-lgr7d
Jan  8 15:13:59.495: INFO: Got endpoints: latency-svc-lgr7d [1.581390029s]
Jan  8 15:13:59.559: INFO: Created: latency-svc-wnt7b
Jan  8 15:13:59.570: INFO: Got endpoints: latency-svc-wnt7b [1.547419396s]
Jan  8 15:13:59.682: INFO: Created: latency-svc-pzq2g
Jan  8 15:13:59.700: INFO: Got endpoints: latency-svc-pzq2g [1.540183961s]
Jan  8 15:13:59.816: INFO: Created: latency-svc-s6d7p
Jan  8 15:13:59.817: INFO: Got endpoints: latency-svc-s6d7p [1.592826994s]
Jan  8 15:13:59.843: INFO: Created: latency-svc-t4ftk
Jan  8 15:13:59.868: INFO: Got endpoints: latency-svc-t4ftk [1.518356757s]
Jan  8 15:13:59.947: INFO: Created: latency-svc-khk8l
Jan  8 15:14:00.004: INFO: Got endpoints: latency-svc-khk8l [1.575905599s]
Jan  8 15:14:00.015: INFO: Created: latency-svc-527db
Jan  8 15:14:00.019: INFO: Got endpoints: latency-svc-527db [1.467312777s]
Jan  8 15:14:00.152: INFO: Created: latency-svc-pvtck
Jan  8 15:14:00.166: INFO: Got endpoints: latency-svc-pvtck [1.490043853s]
Jan  8 15:14:00.215: INFO: Created: latency-svc-7l5cr
Jan  8 15:14:00.293: INFO: Got endpoints: latency-svc-7l5cr [1.567718035s]
Jan  8 15:14:00.327: INFO: Created: latency-svc-zmk8s
Jan  8 15:14:00.338: INFO: Got endpoints: latency-svc-zmk8s [1.490510087s]
Jan  8 15:14:00.384: INFO: Created: latency-svc-wxqw2
Jan  8 15:14:00.488: INFO: Got endpoints: latency-svc-wxqw2 [1.573555473s]
Jan  8 15:14:00.499: INFO: Created: latency-svc-q5pjh
Jan  8 15:14:00.512: INFO: Got endpoints: latency-svc-q5pjh [1.393412013s]
Jan  8 15:14:00.548: INFO: Created: latency-svc-4z75z
Jan  8 15:14:00.672: INFO: Got endpoints: latency-svc-4z75z [1.500848351s]
Jan  8 15:14:00.674: INFO: Created: latency-svc-kpn2b
Jan  8 15:14:00.682: INFO: Got endpoints: latency-svc-kpn2b [1.366992514s]
Jan  8 15:14:00.756: INFO: Created: latency-svc-s7zhd
Jan  8 15:14:00.756: INFO: Got endpoints: latency-svc-s7zhd [1.368847717s]
Jan  8 15:14:00.859: INFO: Created: latency-svc-c6gwd
Jan  8 15:14:00.873: INFO: Got endpoints: latency-svc-c6gwd [1.377880744s]
Jan  8 15:14:00.916: INFO: Created: latency-svc-vhrgn
Jan  8 15:14:01.087: INFO: Got endpoints: latency-svc-vhrgn [1.516933479s]
Jan  8 15:14:01.093: INFO: Created: latency-svc-rtx95
Jan  8 15:14:01.098: INFO: Got endpoints: latency-svc-rtx95 [1.397109848s]
Jan  8 15:14:01.173: INFO: Created: latency-svc-cnj2b
Jan  8 15:14:01.181: INFO: Got endpoints: latency-svc-cnj2b [1.364187364s]
Jan  8 15:14:01.290: INFO: Created: latency-svc-24lx4
Jan  8 15:14:01.417: INFO: Got endpoints: latency-svc-24lx4 [1.548905809s]
Jan  8 15:14:01.466: INFO: Created: latency-svc-mzjj5
Jan  8 15:14:01.470: INFO: Got endpoints: latency-svc-mzjj5 [1.466030772s]
Jan  8 15:14:01.518: INFO: Created: latency-svc-fhv2r
Jan  8 15:14:01.598: INFO: Got endpoints: latency-svc-fhv2r [1.578807058s]
Jan  8 15:14:01.645: INFO: Created: latency-svc-4zhxq
Jan  8 15:14:01.651: INFO: Got endpoints: latency-svc-4zhxq [1.484262431s]
Jan  8 15:14:01.692: INFO: Created: latency-svc-ql4nh
Jan  8 15:14:01.765: INFO: Got endpoints: latency-svc-ql4nh [1.471423519s]
Jan  8 15:14:01.800: INFO: Created: latency-svc-khf69
Jan  8 15:14:01.841: INFO: Got endpoints: latency-svc-khf69 [1.502681953s]
Jan  8 15:14:01.843: INFO: Created: latency-svc-gnsp7
Jan  8 15:14:01.854: INFO: Got endpoints: latency-svc-gnsp7 [1.365797328s]
Jan  8 15:14:01.945: INFO: Created: latency-svc-h4xjc
Jan  8 15:14:01.989: INFO: Got endpoints: latency-svc-h4xjc [1.477075087s]
Jan  8 15:14:01.996: INFO: Created: latency-svc-nj28m
Jan  8 15:14:02.021: INFO: Got endpoints: latency-svc-nj28m [1.348473443s]
Jan  8 15:14:02.135: INFO: Created: latency-svc-jnkrl
Jan  8 15:14:02.154: INFO: Got endpoints: latency-svc-jnkrl [1.471873244s]
Jan  8 15:14:02.253: INFO: Created: latency-svc-wz6wx
Jan  8 15:14:02.300: INFO: Got endpoints: latency-svc-wz6wx [1.543555409s]
Jan  8 15:14:02.319: INFO: Created: latency-svc-zp7x5
Jan  8 15:14:02.319: INFO: Got endpoints: latency-svc-zp7x5 [1.446104985s]
Jan  8 15:14:02.429: INFO: Created: latency-svc-628b4
Jan  8 15:14:02.435: INFO: Got endpoints: latency-svc-628b4 [1.347299145s]
Jan  8 15:14:02.480: INFO: Created: latency-svc-cwbpx
Jan  8 15:14:02.487: INFO: Got endpoints: latency-svc-cwbpx [1.388892135s]
Jan  8 15:14:02.487: INFO: Latencies: [51.380149ms 148.245614ms 198.835497ms 221.547674ms 276.086347ms 409.65699ms 453.635034ms 611.215198ms 621.075559ms 771.803544ms 838.198593ms 978.053045ms 1.033319628s 1.188638537s 1.200478545s 1.201163184s 1.212874143s 1.22057831s 1.221444611s 1.246094072s 1.254743696s 1.270774065s 1.271573278s 1.290001123s 1.2940716s 1.298308494s 1.311635305s 1.318542336s 1.335464665s 1.338302306s 1.34326095s 1.346479991s 1.347142124s 1.347299145s 1.348101264s 1.348473443s 1.349199489s 1.351682332s 1.352806553s 1.361742978s 1.363409489s 1.363971686s 1.364187364s 1.365797328s 1.366845982s 1.366992514s 1.367853375s 1.368847717s 1.369432569s 1.3721983s 1.374093291s 1.37448908s 1.375912857s 1.377880744s 1.387487376s 1.388892135s 1.389223635s 1.39080518s 1.391534972s 1.392224103s 1.393412013s 1.396894112s 1.397109848s 1.40933414s 1.411367789s 1.41244801s 1.418083735s 1.420346051s 1.421165563s 1.424397819s 1.431813387s 1.436731291s 1.438285695s 1.446104985s 1.449228084s 1.453406271s 1.456489315s 1.458932535s 1.460714635s 1.466030772s 1.46643928s 1.467312777s 1.471423519s 1.471873244s 1.475187702s 1.477075087s 1.484262431s 1.488549483s 1.490043853s 1.490510087s 1.49759738s 1.497660061s 1.500848351s 1.502681953s 1.516933479s 1.518356757s 1.523177402s 1.539062063s 1.540183961s 1.543555409s 1.547419396s 1.548905809s 1.567718035s 1.573555473s 1.575905599s 1.578807058s 1.581390029s 1.586388724s 1.592826994s 1.603703751s 1.60765935s 1.613569257s 1.615884946s 1.649586401s 1.657553457s 1.6583069s 1.662497364s 1.669928343s 1.674133437s 1.681331495s 1.682176238s 1.683093651s 1.683707409s 1.693659992s 1.69545414s 1.717575073s 1.724040076s 1.727974162s 1.739545766s 1.743089681s 1.745531764s 1.745674069s 1.754558398s 1.762237119s 1.763228835s 1.764013541s 1.765375517s 1.76875627s 1.779349885s 1.781449503s 1.783333046s 1.78349245s 1.784956556s 1.786884168s 1.787389815s 1.794849971s 1.805426752s 1.80578285s 1.814560024s 1.816748444s 1.830871055s 1.831890243s 1.83891579s 1.841750368s 1.843725246s 1.849129502s 1.851048229s 1.853943099s 1.857566956s 1.861217456s 1.862102237s 1.867144964s 1.867663404s 1.870077267s 1.875566762s 1.877092428s 1.881755338s 1.893294495s 1.902375619s 1.91882097s 1.922983036s 1.924457664s 1.924713877s 1.930862177s 1.935138803s 1.941518867s 1.950269764s 1.961775442s 1.965206955s 1.965399284s 1.973991001s 1.977077068s 2.00172708s 2.004435996s 2.005520894s 2.006234399s 2.019748232s 2.023437888s 2.031786174s 2.050031003s 2.055209637s 2.061808356s 2.067767693s 2.081030032s 2.093124645s 2.094749714s 2.136922462s 2.137512045s 2.175413826s 2.215028947s]
Jan  8 15:14:02.487: INFO: 50 %ile: 1.547419396s
Jan  8 15:14:02.487: INFO: 90 %ile: 1.973991001s
Jan  8 15:14:02.487: INFO: 99 %ile: 2.175413826s
Jan  8 15:14:02.488: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:14:02.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3127" for this suite.
Jan  8 15:14:42.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:14:42.687: INFO: namespace svc-latency-3127 deletion completed in 40.182960048s

• [SLOW TEST:69.881 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
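
Each Created/Got endpoints pair above times the window from service creation to a populated Endpoints object. A rough manual analogue against the same replication controller (service name illustrative; the test itself watches endpoints through the API rather than polling):

kubectl expose rc svc-latency-rc --name=latency-probe --port=80 --namespace=svc-latency-3127
# poll until the service has at least one ready endpoint address, and time it
time ( until kubectl get endpoints latency-probe --namespace=svc-latency-3127 \
    -o 'jsonpath={.subsets[*].addresses[*].ip}' | grep -q . ; do sleep 0.1; done )
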
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:14:42.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 15:14:42.831: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  8 15:14:42.844: INFO: Number of nodes with available pods: 0
Jan  8 15:14:42.844: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  8 15:14:42.947: INFO: Number of nodes with available pods: 0
Jan  8 15:14:42.947: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:43.955: INFO: Number of nodes with available pods: 0
Jan  8 15:14:43.955: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:44.956: INFO: Number of nodes with available pods: 0
Jan  8 15:14:44.956: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:45.960: INFO: Number of nodes with available pods: 0
Jan  8 15:14:45.960: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:46.954: INFO: Number of nodes with available pods: 0
Jan  8 15:14:46.954: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:47.964: INFO: Number of nodes with available pods: 0
Jan  8 15:14:47.964: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:48.956: INFO: Number of nodes with available pods: 0
Jan  8 15:14:48.956: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:49.968: INFO: Number of nodes with available pods: 1
Jan  8 15:14:49.968: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  8 15:14:50.020: INFO: Number of nodes with available pods: 1
Jan  8 15:14:50.020: INFO: Number of running nodes: 0, number of available pods: 1
Jan  8 15:14:51.028: INFO: Number of nodes with available pods: 0
Jan  8 15:14:51.028: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  8 15:14:51.064: INFO: Number of nodes with available pods: 0
Jan  8 15:14:51.064: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:52.072: INFO: Number of nodes with available pods: 0
Jan  8 15:14:52.072: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:53.073: INFO: Number of nodes with available pods: 0
Jan  8 15:14:53.073: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:54.079: INFO: Number of nodes with available pods: 0
Jan  8 15:14:54.079: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:55.132: INFO: Number of nodes with available pods: 0
Jan  8 15:14:55.132: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:56.072: INFO: Number of nodes with available pods: 0
Jan  8 15:14:56.072: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:57.082: INFO: Number of nodes with available pods: 0
Jan  8 15:14:57.082: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:58.074: INFO: Number of nodes with available pods: 0
Jan  8 15:14:58.074: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:14:59.071: INFO: Number of nodes with available pods: 0
Jan  8 15:14:59.071: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:00.104: INFO: Number of nodes with available pods: 0
Jan  8 15:15:00.104: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:01.074: INFO: Number of nodes with available pods: 0
Jan  8 15:15:01.074: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:02.074: INFO: Number of nodes with available pods: 0
Jan  8 15:15:02.074: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:03.072: INFO: Number of nodes with available pods: 0
Jan  8 15:15:03.072: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:04.078: INFO: Number of nodes with available pods: 0
Jan  8 15:15:04.078: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:05.073: INFO: Number of nodes with available pods: 0
Jan  8 15:15:05.073: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:06.073: INFO: Number of nodes with available pods: 0
Jan  8 15:15:06.073: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:07.071: INFO: Number of nodes with available pods: 0
Jan  8 15:15:07.071: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:08.071: INFO: Number of nodes with available pods: 0
Jan  8 15:15:08.071: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:09.070: INFO: Number of nodes with available pods: 0
Jan  8 15:15:09.070: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:10.070: INFO: Number of nodes with available pods: 0
Jan  8 15:15:10.070: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:11.076: INFO: Number of nodes with available pods: 0
Jan  8 15:15:11.076: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:12.073: INFO: Number of nodes with available pods: 0
Jan  8 15:15:12.073: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:13.074: INFO: Number of nodes with available pods: 0
Jan  8 15:15:13.074: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:14.079: INFO: Number of nodes with available pods: 0
Jan  8 15:15:14.080: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:15.074: INFO: Number of nodes with available pods: 1
Jan  8 15:15:15.074: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-907, will wait for the garbage collector to delete the pods
Jan  8 15:15:15.197: INFO: Deleting DaemonSet.extensions daemon-set took: 39.627328ms
Jan  8 15:15:15.498: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.550388ms
Jan  8 15:15:26.825: INFO: Number of nodes with available pods: 0
Jan  8 15:15:26.825: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 15:15:26.830: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-907/daemonsets","resourceVersion":"19793036"},"items":null}

Jan  8 15:15:26.835: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-907/pods","resourceVersion":"19793036"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:15:26.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-907" for this suite.
Jan  8 15:15:32.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:15:33.063: INFO: namespace daemonsets-907 deletion completed in 6.138859807s

• [SLOW TEST:50.375 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
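
For reference, the node-relabel step this test polls on can be reproduced with a few client-go calls. The sketch below is illustrative only, not part of this run: it assumes a recent k8s.io/client-go (v0.18+, where API calls take a context, newer than the v1.15 cluster in this log), and the label key "color" and the hard-coded node/namespace names stand in for values the e2e framework generates per run. The counters the log keeps printing ("Number of running nodes", "number of available pods") come straight from the DaemonSet's status fields, as the last lines show.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()

    	// Relabel the node; "color: blue" stands in for the per-run
    	// label the e2e framework actually generates.
    	node, err := cs.CoreV1().Nodes().Get(ctx, "iruya-node", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if node.Labels == nil {
    		node.Labels = map[string]string{}
    	}
    	node.Labels["color"] = "blue"
    	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}

    	// The values polled in the log above come from DaemonSet status.
    	ds, err := cs.AppsV1().DaemonSets("daemonsets-907").Get(ctx, "daemon-set", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("desired=%d available=%d\n",
    		ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
    }
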
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:15:33.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  8 15:15:33.216: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9865,SelfLink:/api/v1/namespaces/watch-9865/configmaps/e2e-watch-test-resource-version,UID:8d5e4a92-2cea-4d3e-b6fd-a86338633a09,ResourceVersion:19793069,Generation:0,CreationTimestamp:2020-01-08 15:15:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  8 15:15:33.216: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9865,SelfLink:/api/v1/namespaces/watch-9865/configmaps/e2e-watch-test-resource-version,UID:8d5e4a92-2cea-4d3e-b6fd-a86338633a09,ResourceVersion:19793070,Generation:0,CreationTimestamp:2020-01-08 15:15:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:15:33.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9865" for this suite.
Jan  8 15:15:39.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:15:39.394: INFO: namespace watch-9865 deletion completed in 6.169599365s

• [SLOW TEST:6.331 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
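
The watch test above can be approximated with the client-go sketch below. It is a sketch under the same assumptions as before (client-go v0.18+, context-taking calls); the resourceVersion "19793068" is illustrative and not a value recorded in this run — the real test captures it from the response to the first configmap update. Starting the watch at that version makes the server replay every later change, which is why the log receives exactly the second MODIFIED event and the DELETED event.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Start watching from a known resourceVersion; "19793068" is a
    	// stand-in for the version returned by the first update.
    	w, err := cs.CoreV1().ConfigMaps("watch-9865").Watch(context.TODO(), metav1.ListOptions{
    		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
    		ResourceVersion: "19793068",
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Events after that version are re-delivered in order: here the
    	// second MODIFIED, then DELETED, matching the log above.
    	for ev := range w.ResultChan() {
    		fmt.Println("Got :", ev.Type)
    	}
    }
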
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:15:39.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  8 15:15:39.564: INFO: Create a RollingUpdate DaemonSet
Jan  8 15:15:39.575: INFO: Check that daemon pods launch on every node of the cluster
Jan  8 15:15:39.656: INFO: Number of nodes with available pods: 0
Jan  8 15:15:39.656: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:41.627: INFO: Number of nodes with available pods: 0
Jan  8 15:15:41.627: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:42.014: INFO: Number of nodes with available pods: 0
Jan  8 15:15:42.014: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:42.675: INFO: Number of nodes with available pods: 0
Jan  8 15:15:42.675: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:43.705: INFO: Number of nodes with available pods: 0
Jan  8 15:15:43.705: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:44.666: INFO: Number of nodes with available pods: 0
Jan  8 15:15:44.666: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:46.465: INFO: Number of nodes with available pods: 0
Jan  8 15:15:46.465: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:47.502: INFO: Number of nodes with available pods: 0
Jan  8 15:15:47.502: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:48.141: INFO: Number of nodes with available pods: 0
Jan  8 15:15:48.141: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:48.673: INFO: Number of nodes with available pods: 0
Jan  8 15:15:48.673: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:49.673: INFO: Number of nodes with available pods: 0
Jan  8 15:15:49.673: INFO: Node iruya-node is running more than one daemon pod
Jan  8 15:15:50.730: INFO: Number of nodes with available pods: 2
Jan  8 15:15:50.730: INFO: Number of running nodes: 2, number of available pods: 2
Jan  8 15:15:50.730: INFO: Update the DaemonSet to trigger a rollout
Jan  8 15:15:50.742: INFO: Updating DaemonSet daemon-set
Jan  8 15:16:09.829: INFO: Roll back the DaemonSet before rollout is complete
Jan  8 15:16:09.899: INFO: Updating DaemonSet daemon-set
Jan  8 15:16:09.899: INFO: Make sure DaemonSet rollback is complete
Jan  8 15:16:09.915: INFO: Wrong image for pod: daemon-set-cz4nm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  8 15:16:09.915: INFO: Pod daemon-set-cz4nm is not available
Jan  8 15:16:11.041: INFO: Wrong image for pod: daemon-set-cz4nm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  8 15:16:11.041: INFO: Pod daemon-set-cz4nm is not available
Jan  8 15:16:12.039: INFO: Wrong image for pod: daemon-set-cz4nm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  8 15:16:12.039: INFO: Pod daemon-set-cz4nm is not available
Jan  8 15:16:13.042: INFO: Wrong image for pod: daemon-set-cz4nm. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan  8 15:16:13.042: INFO: Pod daemon-set-cz4nm is not available
Jan  8 15:16:14.388: INFO: Pod daemon-set-mmfxw is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6597, will wait for the garbage collector to delete the pods
Jan  8 15:16:14.503: INFO: Deleting DaemonSet.extensions daemon-set took: 26.341653ms
Jan  8 15:16:15.004: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.542855ms
Jan  8 15:16:26.654: INFO: Number of nodes with available pods: 0
Jan  8 15:16:26.654: INFO: Number of running nodes: 0, number of available pods: 0
Jan  8 15:16:26.661: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6597/daemonsets","resourceVersion":"19793216"},"items":null}

Jan  8 15:16:26.665: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6597/pods","resourceVersion":"19793216"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:16:26.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6597" for this suite.
Jan  8 15:16:32.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:16:32.883: INFO: namespace daemonsets-6597 deletion completed in 6.195943813s

• [SLOW TEST:53.488 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
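
The rollback the test performs ("Updating DaemonSet daemon-set" twice) is, in effect, what kubectl rollout undo daemonset/daemon-set does: the pod template is restored before the broken rollout finishes, so healthy pods are never restarted unnecessarily. A minimal sketch of that flow, under the same client-go assumptions as above and with per-run names hard-coded for illustration:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()
    	dsClient := cs.AppsV1().DaemonSets("daemonsets-6597")

    	// Naive get-modify-update; production code would wrap this in
    	// retry.RetryOnConflict to tolerate concurrent writers.
    	setImage := func(image string) {
    		ds, err := dsClient.Get(ctx, "daemon-set", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		ds.Spec.Template.Spec.Containers[0].Image = image
    		if _, err := dsClient.Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
    			panic(err)
    		}
    	}

    	setImage("foo:non-existent")                    // broken rollout: pods never become ready
    	setImage("docker.io/library/nginx:1.14-alpine") // roll back before the rollout completes
    }
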
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  8 15:16:32.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0108 15:16:36.040232       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  8 15:16:36.040: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  8 15:16:36.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7549" for this suite.
Jan  8 15:16:42.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  8 15:16:42.608: INFO: namespace gc-7549 deletion completed in 6.559115824s

• [SLOW TEST:9.724 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
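
The garbage-collector behavior exercised above can be requested explicitly when deleting through the API. The sketch below is illustrative under the same client-go assumptions; "test-deployment" is a hypothetical name, not the one this run created. With Background propagation (i.e., not orphaning), the Deployment object is removed at once and the GC then deletes the dependent ReplicaSet and pods asynchronously, which is why the poll above briefly reports "expected 0 rs, got 1 rs" before converging to zero.

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Delete without orphaning: dependents are garbage-collected.
    	// "test-deployment" is a stand-in for the per-run name.
    	policy := metav1.DeletePropagationBackground
    	err = cs.AppsV1().Deployments("gc-7549").Delete(context.TODO(), "test-deployment",
    		metav1.DeleteOptions{PropagationPolicy: &policy})
    	if err != nil {
    		panic(err)
    	}
    }
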
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan  8 15:16:42.609: INFO: Running AfterSuite actions on all nodes
Jan  8 15:16:42.609: INFO: Running AfterSuite actions on node 1
Jan  8 15:16:42.609: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8429.772 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8430.09s)
FAIL