I0208 12:56:09.125135 8 e2e.go:243] Starting e2e run "1dc19db1-c19a-40bb-97b3-7aca40d01612" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581166567 - Will randomize all specs
Will run 215 of 4412 specs

Feb 8 12:56:09.366: INFO: >>> kubeConfig: /root/.kube/config
Feb 8 12:56:09.370: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 8 12:56:09.406: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 8 12:56:09.446: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 8 12:56:09.446: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 8 12:56:09.446: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 8 12:56:09.461: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 8 12:56:09.461: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 8 12:56:09.461: INFO: e2e test version: v1.15.7
Feb 8 12:56:09.463: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:56:09.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Feb 8 12:56:09.703: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:56:21.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8763" for this suite.
Feb 8 12:57:08.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:57:08.232: INFO: namespace kubelet-test-8763 deletion completed in 46.172616043s

• [SLOW TEST:58.769 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:57:08.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 8 12:57:08.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526" in namespace "downward-api-6289" to be "success or failure"
Feb 8 12:57:08.408: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Pending", Reason="", readiness=false. Elapsed: 7.436663ms
Feb 8 12:57:10.418: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0176554s
Feb 8 12:57:12.447: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046543396s
Feb 8 12:57:14.453: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053036399s
Feb 8 12:57:16.465: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06509833s
Feb 8 12:57:18.472: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071688521s
STEP: Saw pod success
Feb 8 12:57:18.472: INFO: Pod "downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526" satisfied condition "success or failure"
Feb 8 12:57:18.476: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526 container client-container:
STEP: delete the pod
Feb 8 12:57:18.648: INFO: Waiting for pod downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526 to disappear
Feb 8 12:57:18.651: INFO: Pod downwardapi-volume-892d92f3-c002-40e1-ae60-663bfd5f9526 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:57:18.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6289" for this suite.
Feb 8 12:57:24.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:57:24.800: INFO: namespace downward-api-6289 deletion completed in 6.145399021s

• [SLOW TEST:16.567 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:57:24.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 8 12:57:24.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29" in namespace "downward-api-3761" to be "success or failure"
Feb 8 12:57:24.950: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Pending", Reason="", readiness=false. Elapsed: 31.327453ms
Feb 8 12:57:26.965: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046113737s
Feb 8 12:57:28.972: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053228229s
Feb 8 12:57:30.980: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060539103s
Feb 8 12:57:32.990: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071302978s
Feb 8 12:57:34.996: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076742394s
STEP: Saw pod success
Feb 8 12:57:34.996: INFO: Pod "downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29" satisfied condition "success or failure"
Feb 8 12:57:34.998: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29 container client-container:
STEP: delete the pod
Feb 8 12:57:35.494: INFO: Waiting for pod downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29 to disappear
Feb 8 12:57:35.509: INFO: Pod downwardapi-volume-23f516f8-86a8-427d-85ef-8c9e25a8ac29 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:57:35.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3761" for this suite.
Feb 8 12:57:41.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:57:41.858: INFO: namespace downward-api-3761 deletion completed in 6.325900353s

• [SLOW TEST:17.058 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:57:41.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-0ae26ce0-fe78-427b-817d-73beecacb3ae
STEP: Creating a pod to test consume secrets
Feb 8 12:57:42.090: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204" in namespace "projected-1644" to be "success or failure"
Feb 8 12:57:42.099: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204": Phase="Pending", Reason="", readiness=false. Elapsed: 8.867837ms
Feb 8 12:57:44.105: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01549611s
Feb 8 12:57:46.113: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022802315s
Feb 8 12:57:48.118: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027762986s
Feb 8 12:57:50.125: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035046486s
STEP: Saw pod success
Feb 8 12:57:50.125: INFO: Pod "pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204" satisfied condition "success or failure"
Feb 8 12:57:50.128: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204 container projected-secret-volume-test:
STEP: delete the pod
Feb 8 12:57:50.180: INFO: Waiting for pod pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204 to disappear
Feb 8 12:57:50.197: INFO: Pod pod-projected-secrets-2014a61b-eab8-41be-a2cf-5cb38ece6204 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:57:50.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1644" for this suite.
Feb 8 12:57:58.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:57:58.459: INFO: namespace projected-1644 deletion completed in 8.256498276s

• [SLOW TEST:16.600 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:57:58.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 8 12:57:58.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 8 12:57:58.683: INFO: stderr: ""
Feb 8 12:57:58.683: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:57:58.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6654" for this suite.
Feb 8 12:58:04.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:58:04.798: INFO: namespace kubectl-6654 deletion completed in 6.10529608s

• [SLOW TEST:6.338 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:58:04.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 8 12:58:04.984: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:58:06.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8873" for this suite.
Feb 8 12:58:12.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:58:12.799: INFO: namespace custom-resource-definition-8873 deletion completed in 6.670862369s

• [SLOW TEST:8.001 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:58:12.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:58:24.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7497" for this suite.
Feb 8 12:58:31.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:58:31.116: INFO: namespace kubelet-test-7497 deletion completed in 6.129141587s

• [SLOW TEST:18.316 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:58:31.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3893/configmap-test-28e85a58-d7f4-44ac-814d-7b839b2b3381
STEP: Creating a pod to test consume configMaps
Feb 8 12:58:31.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7" in namespace "configmap-3893" to be "success or failure"
Feb 8 12:58:31.395: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454967ms
Feb 8 12:58:33.400: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010373374s
Feb 8 12:58:35.407: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017246315s
Feb 8 12:58:37.416: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025580902s
Feb 8 12:58:39.429: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038886928s
STEP: Saw pod success
Feb 8 12:58:39.429: INFO: Pod "pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7" satisfied condition "success or failure"
Feb 8 12:58:39.434: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7 container env-test:
STEP: delete the pod
Feb 8 12:58:39.540: INFO: Waiting for pod pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7 to disappear
Feb 8 12:58:39.547: INFO: Pod pod-configmaps-6dbff3e8-278d-46ca-a8e3-0ba8dddd26f7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:58:39.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3893" for this suite.
Feb 8 12:58:45.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:58:45.678: INFO: namespace configmap-3893 deletion completed in 6.127034504s

• [SLOW TEST:14.562 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:58:45.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 8 12:58:56.555: INFO: Successfully updated pod "labelsupdatecc8f0e39-ddbb-447c-95ab-bce3e4753ff2"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:58:58.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1092" for this suite.
Feb 8 12:59:20.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:59:20.782: INFO: namespace downward-api-1092 deletion completed in 22.109303114s

• [SLOW TEST:35.104 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:59:20.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 8 12:59:29.982: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:59:30.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7335" for this suite.
Feb 8 12:59:36.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:59:36.272: INFO: namespace container-runtime-7335 deletion completed in 6.125444069s

• [SLOW TEST:15.490 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:59:36.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 8 12:59:36.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 8 12:59:36.707: INFO: stderr: ""
Feb 8 12:59:36.707: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 8 12:59:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9303" for this suite.
Feb 8 12:59:42.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 8 12:59:42.850: INFO: namespace kubectl-9303 deletion completed in 6.135962922s

• [SLOW TEST:6.578 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 8 12:59:42.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 8 12:59:42.975: INFO: namespace kubectl-3226
Feb 8 12:59:42.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3226'
Feb 8 12:59:44.911: INFO: stderr: ""
Feb 8 12:59:44.911: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  8 12:59:46.553: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:46.553: INFO: Found 0 / 1
Feb  8 12:59:46.925: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:46.925: INFO: Found 0 / 1
Feb  8 12:59:47.928: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:47.928: INFO: Found 0 / 1
Feb  8 12:59:48.923: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:48.923: INFO: Found 0 / 1
Feb  8 12:59:49.984: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:49.984: INFO: Found 0 / 1
Feb  8 12:59:51.084: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:51.084: INFO: Found 0 / 1
Feb  8 12:59:51.924: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:51.924: INFO: Found 0 / 1
Feb  8 12:59:52.959: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:52.959: INFO: Found 0 / 1
Feb  8 12:59:53.947: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:53.947: INFO: Found 0 / 1
Feb  8 12:59:54.923: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:54.923: INFO: Found 1 / 1
Feb  8 12:59:54.923: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Feb  8 12:59:54.928: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 12:59:54.928: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb  8 12:59:54.928: INFO: wait on redis-master startup in kubectl-3226
Feb  8 12:59:54.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2lpxd redis-master --namespace=kubectl-3226'
Feb  8 12:59:55.184: INFO: stderr: ""
Feb  8 12:59:55.184: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Feb 12:59:52.672 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 12:59:52.672 # Server started, Redis version 3.2.12\n1:M 08 Feb 12:59:52.673 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 12:59:52.673 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  8 12:59:55.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3226'
Feb  8 12:59:55.401: INFO: stderr: ""
Feb  8 12:59:55.401: INFO: stdout: "service/rm2 exposed\n"
Feb  8 12:59:55.405: INFO: Service rm2 in namespace kubectl-3226 found.
STEP: exposing service
Feb  8 12:59:57.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3226'
Feb  8 12:59:57.633: INFO: stderr: ""
Feb  8 12:59:57.633: INFO: stdout: "service/rm3 exposed\n"
Feb  8 12:59:57.744: INFO: Service rm3 in namespace kubectl-3226 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 12:59:59.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3226" for this suite.
Feb  8 13:00:21.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:00:21.935: INFO: namespace kubectl-3226 deletion completed in 22.174747582s

• [SLOW TEST:39.084 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:00:21.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  8 13:00:34.147: INFO: Pod pod-hostip-91321f94-ff49-47e8-8cc3-84ab89c2fcc8 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:00:34.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4452" for this suite.
Feb  8 13:00:52.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:00:52.362: INFO: namespace pods-4452 deletion completed in 18.209502832s

• [SLOW TEST:30.427 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:00:52.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:00:52.461: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  8 13:00:54.692: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:00:56.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3861" for this suite.
Feb  8 13:01:06.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:07.007: INFO: namespace replication-controller-3861 deletion completed in 10.374745003s

• [SLOW TEST:14.644 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:01:07.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:01:07.314: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 26.135676ms)
Feb  8 13:01:07.321: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.072627ms)
Feb  8 13:01:07.328: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.165026ms)
Feb  8 13:01:07.335: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.221693ms)
Feb  8 13:01:07.379: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 43.861338ms)
Feb  8 13:01:07.388: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.144176ms)
Feb  8 13:01:07.397: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.389562ms)
Feb  8 13:01:07.404: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.227513ms)
Feb  8 13:01:07.419: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.735118ms)
Feb  8 13:01:07.425: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.419817ms)
Feb  8 13:01:07.433: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.698155ms)
Feb  8 13:01:07.440: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.765926ms)
Feb  8 13:01:07.446: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.326202ms)
Feb  8 13:01:07.455: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.765789ms)
Feb  8 13:01:07.462: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.600894ms)
Feb  8 13:01:07.467: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.363224ms)
Feb  8 13:01:07.473: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.691167ms)
Feb  8 13:01:07.477: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.741144ms)
Feb  8 13:01:07.482: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.632174ms)
Feb  8 13:01:07.488: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.52793ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:01:07.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1720" for this suite.
Feb  8 13:01:13.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:13.647: INFO: namespace proxy-1720 deletion completed in 6.155088154s

• [SLOW TEST:6.639 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:01:13.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb  8 13:01:13.821: INFO: Waiting up to 5m0s for pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4" in namespace "var-expansion-1727" to be "success or failure"
Feb  8 13:01:13.869: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 47.872523ms
Feb  8 13:01:15.881: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06064109s
Feb  8 13:01:17.896: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075721722s
Feb  8 13:01:19.904: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083381754s
Feb  8 13:01:21.915: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093833524s
Feb  8 13:01:23.921: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.100031172s
Feb  8 13:01:25.928: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.107149617s
STEP: Saw pod success
Feb  8 13:01:25.928: INFO: Pod "var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4" satisfied condition "success or failure"
Feb  8 13:01:25.931: INFO: Trying to get logs from node iruya-node pod var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4 container dapi-container: 
STEP: delete the pod
Feb  8 13:01:25.997: INFO: Waiting for pod var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4 to disappear
Feb  8 13:01:26.001: INFO: Pod var-expansion-ed3c1274-aeb7-436e-ab9d-8b836bb33ed4 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:01:26.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1727" for this suite.
Feb  8 13:01:32.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:32.147: INFO: namespace var-expansion-1727 deletion completed in 6.139950984s

• [SLOW TEST:18.500 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
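Annotator's note: the Variable Expansion test above exercises dependent environment variables, where one `env` entry is composed from an earlier one via `$(VAR)` expansion. A minimal sketch of the kind of pod it creates (the names, image, and values here are illustrative, not the exact ones the suite uses):

```yaml
# Hypothetical pod spec: KUBE_ADDR is composed from MY_HOST using $(VAR) expansion.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                # assumed image; the suite's image may differ
    command: ["sh", "-c", "echo $(KUBE_ADDR)"]
    env:
    - name: MY_HOST
      value: "example.local"
    - name: KUBE_ADDR
      value: "https://$(MY_HOST):8080"   # expands using the previously defined MY_HOST
```

Expansion only works for variables defined earlier in the same `env` list, which is why the test's pod succeeds and reaches `Phase="Succeeded"`.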
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:01:32.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  8 13:01:32.210: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  8 13:01:32.283: INFO: Waiting for terminating namespaces to be deleted...
Feb  8 13:01:32.285: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb  8 13:01:32.294: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.294: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 13:01:32.294: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  8 13:01:32.294: INFO: 	Container weave ready: true, restart count 0
Feb  8 13:01:32.294: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 13:01:32.294: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb  8 13:01:32.308: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.308: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  8 13:01:32.308: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.308: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 13:01:32.308: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  8 13:01:32.309: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  8 13:01:32.309: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container coredns ready: true, restart count 0
Feb  8 13:01:32.309: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container etcd ready: true, restart count 0
Feb  8 13:01:32.309: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container weave ready: true, restart count 0
Feb  8 13:01:32.309: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 13:01:32.309: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  8 13:01:32.309: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  8 13:01:32.379: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  8 13:01:32.379: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a.15f16ee94a965c4e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9964/filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a.15f16eea95465304], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a.15f16eeb95f4664b], Reason = [Created], Message = [Created container filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a.15f16eebd075d4a0], Reason = [Started], Message = [Started container filler-pod-893858d1-d344-4e18-8751-43c5711d3f5a]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9a88e714-9de8-497e-8735-75017c66209c.15f16ee94da39542], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9964/filler-pod-9a88e714-9de8-497e-8735-75017c66209c to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9a88e714-9de8-497e-8735-75017c66209c.15f16eea8c20a92f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9a88e714-9de8-497e-8735-75017c66209c.15f16eebb97edaaa], Reason = [Created], Message = [Created container filler-pod-9a88e714-9de8-497e-8735-75017c66209c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9a88e714-9de8-497e-8735-75017c66209c.15f16eebdd0d1c1b], Reason = [Started], Message = [Started container filler-pod-9a88e714-9de8-497e-8735-75017c66209c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f16eec1ab3bbf5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:01:45.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9964" for this suite.
Feb  8 13:01:55.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:01:55.994: INFO: namespace sched-pred-9964 deletion completed in 10.243654541s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:23.847 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:01:55.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 13:01:56.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9923'
Feb  8 13:01:56.289: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 13:01:56.289: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  8 13:01:56.314: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  8 13:01:56.439: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  8 13:01:56.485: INFO: scanned /root for discovery docs: 
Feb  8 13:01:56.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9923'
Feb  8 13:02:19.791: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  8 13:02:19.791: INFO: stdout: "Created e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5\nScaling up e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb  8 13:02:19.791: INFO: stdout: "Created e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5\nScaling up e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  8 13:02:19.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9923'
Feb  8 13:02:19.941: INFO: stderr: ""
Feb  8 13:02:19.941: INFO: stdout: "e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr e2e-test-nginx-rc-72v4t "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  8 13:02:24.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9923'
Feb  8 13:02:25.023: INFO: stderr: ""
Feb  8 13:02:25.023: INFO: stdout: "e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr e2e-test-nginx-rc-72v4t "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  8 13:02:30.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9923'
Feb  8 13:02:30.191: INFO: stderr: ""
Feb  8 13:02:30.191: INFO: stdout: "e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr "
Feb  8 13:02:30.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9923'
Feb  8 13:02:30.265: INFO: stderr: ""
Feb  8 13:02:30.265: INFO: stdout: "true"
Feb  8 13:02:30.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9923'
Feb  8 13:02:30.437: INFO: stderr: ""
Feb  8 13:02:30.437: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  8 13:02:30.437: INFO: e2e-test-nginx-rc-4c4dba6135c440532bd31d40dd2febd5-7lxsr is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  8 13:02:30.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9923'
Feb  8 13:02:30.592: INFO: stderr: ""
Feb  8 13:02:30.592: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:02:30.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9923" for this suite.
Feb  8 13:02:52.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:02:52.864: INFO: namespace kubectl-9923 deletion completed in 22.257963881s

• [SLOW TEST:56.870 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
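Annotator's note: as the log's own stderr shows, `kubectl rolling-update` (and the `--generator=run/v1` flag) is deprecated. The same one-at-a-time replacement behavior described in the stdout ("keep 1 pods available, don't exceed 2 pods") is what a Deployment's built-in RollingUpdate strategy provides; a sketch, with illustrative names:

```yaml
# Hypothetical Deployment mirroring the deprecated rolling-update behavior:
# replace pods gradually while keeping the old replica available.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx             # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # keep 1 pod available, as in the log's rollout
      maxSurge: 1                  # don't exceed 2 pods during the update
  template:
    metadata:
      labels:
        run: e2e-test-nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Updating `spec.template` (e.g. the image) then triggers the rollout; `kubectl rollout status` replaces the old polling loop shown above.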
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:02:52.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  8 13:03:03.636: INFO: Successfully updated pod "pod-update-1810a2ea-2bf9-44f1-9b51-5b03ebe93ad1"
STEP: verifying the updated pod is in kubernetes
Feb  8 13:03:03.668: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:03:03.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5482" for this suite.
Feb  8 13:03:25.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:03:25.895: INFO: namespace pods-5482 deletion completed in 22.201483873s

• [SLOW TEST:33.030 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:03:25.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-297994bb-5932-48a7-8f96-26118fbf60ef
STEP: Creating secret with name s-test-opt-upd-bc452b73-9504-4e1b-8e1f-c158a15da54a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-297994bb-5932-48a7-8f96-26118fbf60ef
STEP: Updating secret s-test-opt-upd-bc452b73-9504-4e1b-8e1f-c158a15da54a
STEP: Creating secret with name s-test-opt-create-b705c7db-75e4-4289-903b-dc55cff62212
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:04:57.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3869" for this suite.
Feb  8 13:05:19.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:19.996: INFO: namespace projected-3869 deletion completed in 22.176971636s

• [SLOW TEST:114.101 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:05:19.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-cee6396b-0066-472f-b79c-150364eb51fa
STEP: Creating a pod to test consume configMaps
Feb  8 13:05:20.113: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00" in namespace "configmap-5580" to be "success or failure"
Feb  8 13:05:20.129: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 16.630946ms
Feb  8 13:05:22.138: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024641942s
Feb  8 13:05:24.154: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040740936s
Feb  8 13:05:26.160: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046831129s
Feb  8 13:05:28.174: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061459532s
Feb  8 13:05:30.226: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Pending", Reason="", readiness=false. Elapsed: 10.112671115s
Feb  8 13:05:32.231: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.118377321s
STEP: Saw pod success
Feb  8 13:05:32.231: INFO: Pod "pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00" satisfied condition "success or failure"
Feb  8 13:05:32.234: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00 container configmap-volume-test: 
STEP: delete the pod
Feb  8 13:05:32.272: INFO: Waiting for pod pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00 to disappear
Feb  8 13:05:32.281: INFO: Pod pod-configmaps-3a40dd45-8405-4aa1-8057-a6c773f49a00 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:05:32.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5580" for this suite.
Feb  8 13:05:38.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:38.540: INFO: namespace configmap-5580 deletion completed in 6.160634738s

• [SLOW TEST:18.544 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:05:38.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  8 13:05:38.625: INFO: Waiting up to 5m0s for pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac" in namespace "downward-api-2102" to be "success or failure"
Feb  8 13:05:38.705: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Pending", Reason="", readiness=false. Elapsed: 79.014243ms
Feb  8 13:05:40.712: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086589869s
Feb  8 13:05:42.737: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111416681s
Feb  8 13:05:44.744: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118852276s
Feb  8 13:05:46.753: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127722617s
Feb  8 13:05:48.775: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149518138s
STEP: Saw pod success
Feb  8 13:05:48.775: INFO: Pod "downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac" satisfied condition "success or failure"
Feb  8 13:05:48.781: INFO: Trying to get logs from node iruya-node pod downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac container dapi-container: 
STEP: delete the pod
Feb  8 13:05:48.982: INFO: Waiting for pod downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac to disappear
Feb  8 13:05:49.017: INFO: Pod downward-api-f252d1a4-ecc7-4c8a-85c6-a1cb99d9fbac no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:05:49.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2102" for this suite.
Feb  8 13:05:55.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:05:55.171: INFO: namespace downward-api-2102 deletion completed in 6.145991517s

• [SLOW TEST:16.631 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:05:55.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-7337e0e4-eaba-4cfc-aa7f-8dace8885f60
STEP: Creating a pod to test consume secrets
Feb  8 13:05:55.483: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d" in namespace "projected-8261" to be "success or failure"
Feb  8 13:05:55.489: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.504691ms
Feb  8 13:05:57.494: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011078791s
Feb  8 13:05:59.503: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019693002s
Feb  8 13:06:01.518: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034577349s
Feb  8 13:06:03.526: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042751642s
Feb  8 13:06:05.538: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054830758s
STEP: Saw pod success
Feb  8 13:06:05.538: INFO: Pod "pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d" satisfied condition "success or failure"
Feb  8 13:06:05.545: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d container secret-volume-test: 
STEP: delete the pod
Feb  8 13:06:05.907: INFO: Waiting for pod pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d to disappear
Feb  8 13:06:05.986: INFO: Pod pod-projected-secrets-cb8ada20-2e88-42be-a042-6f1aeed30b3d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:06:05.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8261" for this suite.
Feb  8 13:06:12.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:06:12.192: INFO: namespace projected-8261 deletion completed in 6.19611752s

• [SLOW TEST:17.021 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:06:12.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  8 13:06:12.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5631'
Feb  8 13:06:12.808: INFO: stderr: ""
Feb  8 13:06:12.808: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:06:12.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:13.115: INFO: stderr: ""
Feb  8 13:06:13.115: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-f9nbn "
Feb  8 13:06:13.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:13.301: INFO: stderr: ""
Feb  8 13:06:13.301: INFO: stdout: ""
Feb  8 13:06:13.301: INFO: update-demo-nautilus-d854f is created but not running
Feb  8 13:06:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:18.528: INFO: stderr: ""
Feb  8 13:06:18.528: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-f9nbn "
Feb  8 13:06:18.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:18.630: INFO: stderr: ""
Feb  8 13:06:18.630: INFO: stdout: ""
Feb  8 13:06:18.630: INFO: update-demo-nautilus-d854f is created but not running
Feb  8 13:06:23.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:23.779: INFO: stderr: ""
Feb  8 13:06:23.779: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-f9nbn "
Feb  8 13:06:23.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:23.998: INFO: stderr: ""
Feb  8 13:06:23.998: INFO: stdout: "true"
Feb  8 13:06:23.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:24.203: INFO: stderr: ""
Feb  8 13:06:24.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:24.203: INFO: validating pod update-demo-nautilus-d854f
Feb  8 13:06:24.247: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:24.248: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:24.248: INFO: update-demo-nautilus-d854f is verified up and running
Feb  8 13:06:24.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9nbn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:24.361: INFO: stderr: ""
Feb  8 13:06:24.361: INFO: stdout: "true"
Feb  8 13:06:24.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9nbn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:24.434: INFO: stderr: ""
Feb  8 13:06:24.434: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:24.434: INFO: validating pod update-demo-nautilus-f9nbn
Feb  8 13:06:24.442: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:24.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:24.442: INFO: update-demo-nautilus-f9nbn is verified up and running
STEP: scaling down the replication controller
Feb  8 13:06:24.444: INFO: scanned /root for discovery docs: 
Feb  8 13:06:24.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5631'
Feb  8 13:06:25.641: INFO: stderr: ""
Feb  8 13:06:25.641: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:06:25.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:25.736: INFO: stderr: ""
Feb  8 13:06:25.736: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-f9nbn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  8 13:06:30.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:30.825: INFO: stderr: ""
Feb  8 13:06:30.825: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-f9nbn "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  8 13:06:35.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:36.002: INFO: stderr: ""
Feb  8 13:06:36.002: INFO: stdout: "update-demo-nautilus-d854f "
Feb  8 13:06:36.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:36.228: INFO: stderr: ""
Feb  8 13:06:36.228: INFO: stdout: "true"
Feb  8 13:06:36.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:36.313: INFO: stderr: ""
Feb  8 13:06:36.313: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:36.313: INFO: validating pod update-demo-nautilus-d854f
Feb  8 13:06:36.339: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:36.339: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:36.339: INFO: update-demo-nautilus-d854f is verified up and running
STEP: scaling up the replication controller
Feb  8 13:06:36.343: INFO: scanned /root for discovery docs: 
Feb  8 13:06:36.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5631'
Feb  8 13:06:37.469: INFO: stderr: ""
Feb  8 13:06:37.469: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:06:37.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:37.637: INFO: stderr: ""
Feb  8 13:06:37.637: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-k7ncg "
Feb  8 13:06:37.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:37.733: INFO: stderr: ""
Feb  8 13:06:37.733: INFO: stdout: "true"
Feb  8 13:06:37.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:38.116: INFO: stderr: ""
Feb  8 13:06:38.116: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:38.116: INFO: validating pod update-demo-nautilus-d854f
Feb  8 13:06:38.134: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:38.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:38.134: INFO: update-demo-nautilus-d854f is verified up and running
Feb  8 13:06:38.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ncg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:38.290: INFO: stderr: ""
Feb  8 13:06:38.290: INFO: stdout: ""
Feb  8 13:06:38.290: INFO: update-demo-nautilus-k7ncg is created but not running
Feb  8 13:06:43.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:43.985: INFO: stderr: ""
Feb  8 13:06:43.986: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-k7ncg "
Feb  8 13:06:43.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:44.747: INFO: stderr: ""
Feb  8 13:06:44.747: INFO: stdout: "true"
Feb  8 13:06:44.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:45.099: INFO: stderr: ""
Feb  8 13:06:45.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:45.099: INFO: validating pod update-demo-nautilus-d854f
Feb  8 13:06:45.106: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:45.106: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:45.106: INFO: update-demo-nautilus-d854f is verified up and running
Feb  8 13:06:45.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ncg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:45.233: INFO: stderr: ""
Feb  8 13:06:45.233: INFO: stdout: ""
Feb  8 13:06:45.233: INFO: update-demo-nautilus-k7ncg is created but not running
Feb  8 13:06:50.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5631'
Feb  8 13:06:50.389: INFO: stderr: ""
Feb  8 13:06:50.389: INFO: stdout: "update-demo-nautilus-d854f update-demo-nautilus-k7ncg "
Feb  8 13:06:50.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:50.510: INFO: stderr: ""
Feb  8 13:06:50.510: INFO: stdout: "true"
Feb  8 13:06:50.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d854f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:50.606: INFO: stderr: ""
Feb  8 13:06:50.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:50.606: INFO: validating pod update-demo-nautilus-d854f
Feb  8 13:06:50.616: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:50.616: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:50.616: INFO: update-demo-nautilus-d854f is verified up and running
Feb  8 13:06:50.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ncg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:50.784: INFO: stderr: ""
Feb  8 13:06:50.784: INFO: stdout: "true"
Feb  8 13:06:50.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7ncg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5631'
Feb  8 13:06:50.880: INFO: stderr: ""
Feb  8 13:06:50.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:06:50.880: INFO: validating pod update-demo-nautilus-k7ncg
Feb  8 13:06:50.890: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:06:50.890: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:06:50.890: INFO: update-demo-nautilus-k7ncg is verified up and running
STEP: using delete to clean up resources
Feb  8 13:06:50.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5631'
Feb  8 13:06:51.018: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 13:06:51.018: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  8 13:06:51.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5631'
Feb  8 13:06:51.107: INFO: stderr: "No resources found.\n"
Feb  8 13:06:51.107: INFO: stdout: ""
Feb  8 13:06:51.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5631 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 13:06:51.220: INFO: stderr: ""
Feb  8 13:06:51.220: INFO: stdout: "update-demo-nautilus-d854f\nupdate-demo-nautilus-k7ncg\n"
Feb  8 13:06:51.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5631'
Feb  8 13:06:51.855: INFO: stderr: "No resources found.\n"
Feb  8 13:06:51.855: INFO: stdout: ""
Feb  8 13:06:51.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5631 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 13:06:51.970: INFO: stderr: ""
Feb  8 13:06:51.970: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:06:51.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5631" for this suite.
Feb  8 13:07:14.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:07:14.933: INFO: namespace kubectl-5631 deletion completed in 22.957467856s

• [SLOW TEST:62.740 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:07:14.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 13:07:15.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b" in namespace "downward-api-1707" to be "success or failure"
Feb  8 13:07:15.059: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.356042ms
Feb  8 13:07:17.070: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020220946s
Feb  8 13:07:19.078: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028205432s
Feb  8 13:07:21.089: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039467784s
Feb  8 13:07:23.095: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045999492s
Feb  8 13:07:25.106: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056868762s
STEP: Saw pod success
Feb  8 13:07:25.106: INFO: Pod "downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b" satisfied condition "success or failure"
Feb  8 13:07:25.111: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b container client-container: 
STEP: delete the pod
Feb  8 13:07:25.185: INFO: Waiting for pod downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b to disappear
Feb  8 13:07:25.203: INFO: Pod downwardapi-volume-25bf784c-a012-48af-8a81-03410fa1b07b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:07:25.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1707" for this suite.
Feb  8 13:07:31.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:07:31.351: INFO: namespace downward-api-1707 deletion completed in 6.140518623s

• [SLOW TEST:16.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
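The spec above checks that a downwardAPI volume exposing `limits.memory` falls back to the node's allocatable memory when the container declares no memory limit. A minimal manifest exercising the same behavior might look like this (the pod name, image, and paths are illustrative, not taken from the log; the e2e suite uses its own test images):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so the downward API
    # reports the node's allocatable memory instead.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```

The test asserts "success or failure" by reading the container's log output, which is why the log shows it fetching logs from container `client-container` after the pod reaches `Succeeded`.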
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:07:31.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  8 13:07:31.461: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:07:31.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6052" for this suite.
Feb  8 13:07:37.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:07:37.700: INFO: namespace kubectl-6052 deletion completed in 6.130723485s

• [SLOW TEST:6.348 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:07:37.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-6c49103f-d899-4084-ae70-d4d8ad7f61b4
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:07:49.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6447" for this suite.
Feb  8 13:08:12.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:12.158: INFO: namespace configmap-6447 deletion completed in 22.157438482s

• [SLOW TEST:34.458 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
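The ConfigMap spec above verifies that both `data` (text) and `binaryData` (base64-encoded bytes) keys are projected into the volume. A sketch of the resources involved, with hypothetical names and arbitrary example bytes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-demo                # hypothetical name
data:
  text-key: "plain text value"
binaryData:
  binary-key: aGVsbG8AdGhlcmU=     # arbitrary base64-encoded bytes; may contain NULs
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls /etc/config"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: binary-demo
```

Both keys appear as files under the mount path; the kubelet writes the decoded bytes of `binaryData` entries verbatim.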
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:08:12.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0c61eca2-47f6-4292-b256-9f6a080debdb
STEP: Creating a pod to test consume secrets
Feb  8 13:08:12.307: INFO: Waiting up to 5m0s for pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd" in namespace "secrets-7378" to be "success or failure"
Feb  8 13:08:12.317: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.329084ms
Feb  8 13:08:14.329: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02225573s
Feb  8 13:08:16.340: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032896292s
Feb  8 13:08:18.351: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044213786s
Feb  8 13:08:20.367: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059422703s
STEP: Saw pod success
Feb  8 13:08:20.367: INFO: Pod "pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd" satisfied condition "success or failure"
Feb  8 13:08:20.370: INFO: Trying to get logs from node iruya-node pod pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd container secret-volume-test: 
STEP: delete the pod
Feb  8 13:08:20.446: INFO: Waiting for pod pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd to disappear
Feb  8 13:08:20.548: INFO: Pod pod-secrets-9af71d3c-d38d-43db-8f60-e648c29163bd no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:08:20.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7378" for this suite.
Feb  8 13:08:26.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:26.759: INFO: namespace secrets-7378 deletion completed in 6.197925356s

• [SLOW TEST:14.601 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
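The Secrets spec above exercises the volume's `defaultMode` field, which controls the permission bits of the projected files. A minimal sketch (names and mode are illustrative; the actual mode used by the test is not visible in this log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-mode-demo           # hypothetical name
data:
  data-1: dmFsdWUtMQ==             # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mode-demo
      defaultMode: 0400            # files show up as -r-------- inside the pod
```

The test then reads the container's log (the `ls -l`-style output) to confirm the mode, which matches the "Trying to get logs from node ... container secret-volume-test" line above.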
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:08:26.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  8 13:08:26.872: INFO: Waiting up to 5m0s for pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777" in namespace "emptydir-2321" to be "success or failure"
Feb  8 13:08:26.888: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Pending", Reason="", readiness=false. Elapsed: 16.107981ms
Feb  8 13:08:28.984: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111686963s
Feb  8 13:08:31.002: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130043785s
Feb  8 13:08:33.009: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136950361s
Feb  8 13:08:35.084: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Pending", Reason="", readiness=false. Elapsed: 8.21222024s
Feb  8 13:08:37.094: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.222522896s
STEP: Saw pod success
Feb  8 13:08:37.095: INFO: Pod "pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777" satisfied condition "success or failure"
Feb  8 13:08:37.103: INFO: Trying to get logs from node iruya-node pod pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777 container test-container: 
STEP: delete the pod
Feb  8 13:08:37.159: INFO: Waiting for pod pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777 to disappear
Feb  8 13:08:37.165: INFO: Pod pod-31c3501c-8d4c-462f-80ca-8e5bcf0c5777 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:08:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2321" for this suite.
Feb  8 13:08:43.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:08:43.311: INFO: namespace emptydir-2321 deletion completed in 6.139824551s

• [SLOW TEST:16.552 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
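The emptyDir spec above combines three things: a non-root user, a tmpfs-backed volume (`medium: Memory`), and 0777 permissions on the volume path. A sketch of the non-root and tmpfs parts (the actual test uses a dedicated mount-test image to create and verify the 0777 path; this manifest only illustrates the API fields involved, with assumed names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # run the container as a non-root UID
  containers:
  - name: test-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # back the volume with tmpfs instead of node disk
```

Because the kubelet creates emptyDir volumes world-writable by default, the non-root container can write into the tmpfs mount, which is the property the conformance test asserts.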
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:08:43.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2327, will wait for the garbage collector to delete the pods
Feb  8 13:08:57.538: INFO: Deleting Job.batch foo took: 11.012249ms
Feb  8 13:08:57.839: INFO: Terminating Job.batch foo pods took: 300.333087ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:09:46.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2327" for this suite.
Feb  8 13:09:52.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:09:52.879: INFO: namespace job-2327 deletion completed in 6.217627862s

• [SLOW TEST:69.567 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
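The Job spec above creates a Job, waits until the number of active pods equals `parallelism`, then deletes the Job and verifies the garbage collector removes the pods. A minimal Job that would keep pods active long enough for such a check (field values are assumed; the log only names the Job `foo`):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2                   # assumed; the test only needs active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox             # assumed image
        command: ["sh", "-c", "sleep 3600"]   # stay Running so pods count as active
```

Deleting the Job with background propagation (rather than orphaning the pods) is what hands cleanup to the garbage collector, matching the "will wait for the garbage collector to delete the pods" line in the log.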
SS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:09:52.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:10:01.231: INFO: Waiting up to 5m0s for pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c" in namespace "pods-9918" to be "success or failure"
Feb  8 13:10:01.272: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.123186ms
Feb  8 13:10:03.283: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052064072s
Feb  8 13:10:05.308: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076638164s
Feb  8 13:10:07.316: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085113896s
Feb  8 13:10:09.323: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092153492s
Feb  8 13:10:11.331: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100364645s
STEP: Saw pod success
Feb  8 13:10:11.331: INFO: Pod "client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c" satisfied condition "success or failure"
Feb  8 13:10:11.335: INFO: Trying to get logs from node iruya-node pod client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c container env3cont: 
STEP: delete the pod
Feb  8 13:10:11.410: INFO: Waiting for pod client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c to disappear
Feb  8 13:10:11.417: INFO: Pod client-envvars-2611fd1f-ff40-4f32-bf7f-5c8f9a38cf5c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:10:11.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9918" for this suite.
Feb  8 13:10:57.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:10:57.633: INFO: namespace pods-9918 deletion completed in 46.203356184s

• [SLOW TEST:64.753 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
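The Pods spec above checks the legacy service environment variables: a pod created after a Service exists receives `<SERVICE_NAME>_SERVICE_HOST` and `<SERVICE_NAME>_SERVICE_PORT` variables (name uppercased, dashes replaced with underscores). A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fooservice                 # hypothetical name
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
---
# A pod created *after* the Service, in the same namespace, sees
# FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT in its environment.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox                 # assumed image
    command: ["sh", "-c", "env"]
```

The ordering requirement explains the test's structure: it first runs a server pod behind the Service, and only then creates the client pod whose `env` output it inspects.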
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:10:57.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-lczx
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 13:10:57.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lczx" in namespace "subpath-6606" to be "success or failure"
Feb  8 13:10:57.908: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Pending", Reason="", readiness=false. Elapsed: 26.3774ms
Feb  8 13:10:59.917: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034845363s
Feb  8 13:11:01.931: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049196846s
Feb  8 13:11:03.939: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056817077s
Feb  8 13:11:05.946: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063893936s
Feb  8 13:11:07.958: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 10.076525661s
Feb  8 13:11:09.972: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 12.089947144s
Feb  8 13:11:11.980: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 14.098504692s
Feb  8 13:11:14.003: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 16.121189439s
Feb  8 13:11:16.010: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 18.128470104s
Feb  8 13:11:18.020: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 20.137813151s
Feb  8 13:11:20.026: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 22.143894267s
Feb  8 13:11:22.034: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 24.152242997s
Feb  8 13:11:24.042: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 26.160133356s
Feb  8 13:11:26.048: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Running", Reason="", readiness=true. Elapsed: 28.166004547s
Feb  8 13:11:28.054: INFO: Pod "pod-subpath-test-projected-lczx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.172233472s
STEP: Saw pod success
Feb  8 13:11:28.054: INFO: Pod "pod-subpath-test-projected-lczx" satisfied condition "success or failure"
Feb  8 13:11:28.057: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-lczx container test-container-subpath-projected-lczx: 
STEP: delete the pod
Feb  8 13:11:28.315: INFO: Waiting for pod pod-subpath-test-projected-lczx to disappear
Feb  8 13:11:28.326: INFO: Pod pod-subpath-test-projected-lczx no longer exists
STEP: Deleting pod pod-subpath-test-projected-lczx
Feb  8 13:11:28.326: INFO: Deleting pod "pod-subpath-test-projected-lczx" in namespace "subpath-6606"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:11:28.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6606" for this suite.
Feb  8 13:11:34.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:11:34.534: INFO: namespace subpath-6606 deletion completed in 6.193648879s

• [SLOW TEST:36.902 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
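The Subpath spec above mounts a `subPath` of a projected volume, exercising the atomic-writer path (projected, ConfigMap, Secret, and downwardAPI volumes are all written atomically via symlinked timestamped directories). A rough sketch of the shape of such a pod; the volume source, file names, and ConfigMap are assumptions, since the log does not show the manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /test-volume/file.txt"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: projected-subpath   # mount only this path within the volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap       # assumed ConfigMap supplying the data
```

The long run of `Running` phases in the log (roughly 10s to 28s elapsed) reflects the test container repeatedly reading the file through the subpath before exiting successfully.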
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:11:34.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 13:11:34.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b" in namespace "projected-8473" to be "success or failure"
Feb  8 13:11:34.785: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.919783ms
Feb  8 13:11:36.799: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054248171s
Feb  8 13:11:38.812: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06778933s
Feb  8 13:11:40.821: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076650634s
Feb  8 13:11:42.829: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08400851s
Feb  8 13:11:44.837: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09221358s
STEP: Saw pod success
Feb  8 13:11:44.837: INFO: Pod "downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b" satisfied condition "success or failure"
Feb  8 13:11:44.839: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b container client-container: 
STEP: delete the pod
Feb  8 13:11:44.917: INFO: Waiting for pod downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b to disappear
Feb  8 13:11:44.953: INFO: Pod downwardapi-volume-d422fdab-6203-4cd0-b3a3-bb84d134f48b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:11:44.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8473" for this suite.
Feb  8 13:11:50.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:11:51.087: INFO: namespace projected-8473 deletion completed in 6.129613173s

• [SLOW TEST:16.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:11:51.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-355aa775-a8d1-4fce-8551-287e3d600faf
STEP: Creating a pod to test consume configMaps
Feb  8 13:11:51.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803" in namespace "configmap-2616" to be "success or failure"
Feb  8 13:11:51.222: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803": Phase="Pending", Reason="", readiness=false. Elapsed: 7.008181ms
Feb  8 13:11:53.228: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012114233s
Feb  8 13:11:55.247: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031146177s
Feb  8 13:11:57.254: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038640114s
Feb  8 13:11:59.268: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052709431s
STEP: Saw pod success
Feb  8 13:11:59.268: INFO: Pod "pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803" satisfied condition "success or failure"
Feb  8 13:11:59.280: INFO: Trying to get logs from node iruya-node pod pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803 container configmap-volume-test: 
STEP: delete the pod
Feb  8 13:11:59.373: INFO: Waiting for pod pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803 to disappear
Feb  8 13:11:59.376: INFO: Pod pod-configmaps-fb22eace-4d0d-4d5a-a545-ed10ef849803 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:11:59.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2616" for this suite.
Feb  8 13:12:05.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:12:05.604: INFO: namespace configmap-2616 deletion completed in 6.221082072s

• [SLOW TEST:14.517 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:12:05.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  8 13:12:05.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1476'
Feb  8 13:12:08.591: INFO: stderr: ""
Feb  8 13:12:08.591: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:12:08.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1476'
Feb  8 13:12:08.821: INFO: stderr: ""
Feb  8 13:12:08.821: INFO: stdout: "update-demo-nautilus-lwz2c update-demo-nautilus-n5flr "
Feb  8 13:12:08.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwz2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:08.956: INFO: stderr: ""
Feb  8 13:12:08.956: INFO: stdout: ""
Feb  8 13:12:08.956: INFO: update-demo-nautilus-lwz2c is created but not running
Feb  8 13:12:13.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1476'
Feb  8 13:12:14.067: INFO: stderr: ""
Feb  8 13:12:14.067: INFO: stdout: "update-demo-nautilus-lwz2c update-demo-nautilus-n5flr "
Feb  8 13:12:14.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwz2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:14.152: INFO: stderr: ""
Feb  8 13:12:14.152: INFO: stdout: ""
Feb  8 13:12:14.152: INFO: update-demo-nautilus-lwz2c is created but not running
Feb  8 13:12:19.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1476'
Feb  8 13:12:19.233: INFO: stderr: ""
Feb  8 13:12:19.233: INFO: stdout: "update-demo-nautilus-lwz2c update-demo-nautilus-n5flr "
Feb  8 13:12:19.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwz2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:19.320: INFO: stderr: ""
Feb  8 13:12:19.320: INFO: stdout: ""
Feb  8 13:12:19.320: INFO: update-demo-nautilus-lwz2c is created but not running
Feb  8 13:12:24.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1476'
Feb  8 13:12:24.432: INFO: stderr: ""
Feb  8 13:12:24.432: INFO: stdout: "update-demo-nautilus-lwz2c update-demo-nautilus-n5flr "
Feb  8 13:12:24.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwz2c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:24.525: INFO: stderr: ""
Feb  8 13:12:24.526: INFO: stdout: "true"
Feb  8 13:12:24.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lwz2c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:24.623: INFO: stderr: ""
Feb  8 13:12:24.623: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:12:24.623: INFO: validating pod update-demo-nautilus-lwz2c
Feb  8 13:12:24.632: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:12:24.632: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:12:24.632: INFO: update-demo-nautilus-lwz2c is verified up and running
Feb  8 13:12:24.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n5flr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:24.715: INFO: stderr: ""
Feb  8 13:12:24.715: INFO: stdout: "true"
Feb  8 13:12:24.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n5flr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:24.839: INFO: stderr: ""
Feb  8 13:12:24.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:12:24.840: INFO: validating pod update-demo-nautilus-n5flr
Feb  8 13:12:24.874: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:12:24.874: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  8 13:12:24.874: INFO: update-demo-nautilus-n5flr is verified up and running
STEP: rolling-update to new replication controller
Feb  8 13:12:24.879: INFO: scanned /root for discovery docs: 
Feb  8 13:12:24.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1476'
Feb  8 13:12:56.616: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  8 13:12:56.616: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:12:56.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1476'
Feb  8 13:12:56.766: INFO: stderr: ""
Feb  8 13:12:56.766: INFO: stdout: "update-demo-kitten-jzf7w update-demo-kitten-rqpxr "
Feb  8 13:12:56.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jzf7w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:56.882: INFO: stderr: ""
Feb  8 13:12:56.883: INFO: stdout: "true"
Feb  8 13:12:56.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jzf7w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:57.019: INFO: stderr: ""
Feb  8 13:12:57.019: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  8 13:12:57.019: INFO: validating pod update-demo-kitten-jzf7w
Feb  8 13:12:57.035: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  8 13:12:57.036: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  8 13:12:57.036: INFO: update-demo-kitten-jzf7w is verified up and running
Feb  8 13:12:57.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rqpxr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:57.141: INFO: stderr: ""
Feb  8 13:12:57.141: INFO: stdout: "true"
Feb  8 13:12:57.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rqpxr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1476'
Feb  8 13:12:57.222: INFO: stderr: ""
Feb  8 13:12:57.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  8 13:12:57.222: INFO: validating pod update-demo-kitten-rqpxr
Feb  8 13:12:57.248: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  8 13:12:57.248: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  8 13:12:57.248: INFO: update-demo-kitten-rqpxr is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:12:57.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1476" for this suite.
Feb  8 13:13:23.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:13:23.396: INFO: namespace kubectl-1476 deletion completed in 26.144381023s

• [SLOW TEST:77.792 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:13:23.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-043ceb44-1756-417c-8986-04158a1047df
STEP: Creating a pod to test consume configMaps
Feb  8 13:13:23.547: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7" in namespace "projected-1437" to be "success or failure"
Feb  8 13:13:23.554: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.766642ms
Feb  8 13:13:25.562: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014943199s
Feb  8 13:13:27.570: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02315874s
Feb  8 13:13:29.580: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032925299s
Feb  8 13:13:31.590: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04338125s
STEP: Saw pod success
Feb  8 13:13:31.590: INFO: Pod "pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7" satisfied condition "success or failure"
Feb  8 13:13:31.595: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:13:31.681: INFO: Waiting for pod pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7 to disappear
Feb  8 13:13:31.692: INFO: Pod pod-projected-configmaps-6f15040b-2747-4f3e-a3c9-f21122200ee7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:13:31.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1437" for this suite.
Feb  8 13:13:37.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:13:37.951: INFO: namespace projected-1437 deletion completed in 6.253947312s

• [SLOW TEST:14.555 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:13:37.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:13:38.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5360'
Feb  8 13:13:38.417: INFO: stderr: ""
Feb  8 13:13:38.417: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  8 13:13:38.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5360'
Feb  8 13:13:39.038: INFO: stderr: ""
Feb  8 13:13:39.038: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  8 13:13:40.050: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:40.050: INFO: Found 0 / 1
Feb  8 13:13:41.054: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:41.054: INFO: Found 0 / 1
Feb  8 13:13:42.054: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:42.054: INFO: Found 0 / 1
Feb  8 13:13:43.048: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:43.048: INFO: Found 0 / 1
Feb  8 13:13:44.044: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:44.044: INFO: Found 0 / 1
Feb  8 13:13:45.044: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:45.044: INFO: Found 0 / 1
Feb  8 13:13:46.051: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:46.051: INFO: Found 0 / 1
Feb  8 13:13:47.047: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:47.047: INFO: Found 0 / 1
Feb  8 13:13:48.048: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:48.048: INFO: Found 1 / 1
Feb  8 13:13:48.048: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  8 13:13:48.052: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:13:48.052: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  8 13:13:48.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-h5vlw --namespace=kubectl-5360'
Feb  8 13:13:48.228: INFO: stderr: ""
Feb  8 13:13:48.228: INFO: stdout: "Name:           redis-master-h5vlw\nNamespace:      kubectl-5360\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sat, 08 Feb 2020 13:13:38 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://d63c87cf450a6e1872750bfc44ce16f49ace9ebfe22d1d08d8075a92334553ee\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sat, 08 Feb 2020 13:13:46 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6574k (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-6574k:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-6574k\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  10s   default-scheduler    Successfully assigned kubectl-5360/redis-master-h5vlw to iruya-node\n  Normal  Pulled     6s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb  8 13:13:48.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-5360'
Feb  8 13:13:48.348: INFO: stderr: ""
Feb  8 13:13:48.349: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-5360\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-h5vlw\n"
Feb  8 13:13:48.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-5360'
Feb  8 13:13:48.504: INFO: stderr: ""
Feb  8 13:13:48.504: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-5360\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.65.207\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  8 13:13:48.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  8 13:13:48.635: INFO: stderr: ""
Feb  8 13:13:48.635: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sat, 08 Feb 2020 13:13:01 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sat, 08 Feb 2020 13:13:01 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sat, 08 Feb 2020 13:13:01 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sat, 08 Feb 2020 13:13:01 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         188d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         119d\n  kubectl-5360               redis-master-h5vlw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  8 13:13:48.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5360'
Feb  8 13:13:48.730: INFO: stderr: ""
Feb  8 13:13:48.730: INFO: stdout: "Name:         kubectl-5360\nLabels:       e2e-framework=kubectl\n              e2e-run=1dc19db1-c19a-40bb-97b3-7aca40d01612\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:13:48.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5360" for this suite.
Feb  8 13:14:10.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:14:11.109: INFO: namespace kubectl-5360 deletion completed in 22.326074359s

• [SLOW TEST:33.157 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:14:11.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6266
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6266
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6266
Feb  8 13:14:11.442: INFO: Found 0 stateful pods, waiting for 1
Feb  8 13:14:21.450: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  8 13:14:21.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:14:22.142: INFO: stderr: "I0208 13:14:21.708458    1437 log.go:172] (0xc0000fafd0) (0xc00061ec80) Create stream\nI0208 13:14:21.708675    1437 log.go:172] (0xc0000fafd0) (0xc00061ec80) Stream added, broadcasting: 1\nI0208 13:14:21.719061    1437 log.go:172] (0xc0000fafd0) Reply frame received for 1\nI0208 13:14:21.719132    1437 log.go:172] (0xc0000fafd0) (0xc00061ed20) Create stream\nI0208 13:14:21.719144    1437 log.go:172] (0xc0000fafd0) (0xc00061ed20) Stream added, broadcasting: 3\nI0208 13:14:21.722228    1437 log.go:172] (0xc0000fafd0) Reply frame received for 3\nI0208 13:14:21.722307    1437 log.go:172] (0xc0000fafd0) (0xc000a1a000) Create stream\nI0208 13:14:21.722328    1437 log.go:172] (0xc0000fafd0) (0xc000a1a000) Stream added, broadcasting: 5\nI0208 13:14:21.726081    1437 log.go:172] (0xc0000fafd0) Reply frame received for 5\nI0208 13:14:21.882511    1437 log.go:172] (0xc0000fafd0) Data frame received for 5\nI0208 13:14:21.882618    1437 log.go:172] (0xc000a1a000) (5) Data frame handling\nI0208 13:14:21.882653    1437 log.go:172] (0xc000a1a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:14:21.947680    1437 log.go:172] (0xc0000fafd0) Data frame received for 3\nI0208 13:14:21.947779    1437 log.go:172] (0xc00061ed20) (3) Data frame handling\nI0208 13:14:21.947822    1437 log.go:172] (0xc00061ed20) (3) Data frame sent\nI0208 13:14:22.110308    1437 log.go:172] (0xc0000fafd0) Data frame received for 1\nI0208 13:14:22.110506    1437 log.go:172] (0xc0000fafd0) (0xc000a1a000) Stream removed, broadcasting: 5\nI0208 13:14:22.110708    1437 log.go:172] (0xc00061ec80) (1) Data frame handling\nI0208 13:14:22.110840    1437 log.go:172] (0xc00061ec80) (1) Data frame sent\nI0208 13:14:22.111033    1437 log.go:172] (0xc0000fafd0) (0xc00061ed20) Stream removed, broadcasting: 3\nI0208 13:14:22.111225    1437 log.go:172] (0xc0000fafd0) (0xc00061ec80) Stream removed, broadcasting: 1\nI0208 13:14:22.112384    1437 log.go:172] (0xc0000fafd0) Go away received\nI0208 13:14:22.119262    1437 log.go:172] (0xc0000fafd0) (0xc00061ec80) Stream removed, broadcasting: 1\nI0208 13:14:22.119326    1437 log.go:172] (0xc0000fafd0) (0xc00061ed20) Stream removed, broadcasting: 3\nI0208 13:14:22.119373    1437 log.go:172] (0xc0000fafd0) (0xc000a1a000) Stream removed, broadcasting: 5\n"
Feb  8 13:14:22.142: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:14:22.142: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:14:22.159: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  8 13:14:32.168: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:14:32.168: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:14:32.192: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999663s
Feb  8 13:14:33.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990445001s
Feb  8 13:14:34.209: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981657891s
Feb  8 13:14:35.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.973404146s
Feb  8 13:14:36.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.964771008s
Feb  8 13:14:37.238: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.954800468s
Feb  8 13:14:38.256: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.943616677s
Feb  8 13:14:39.267: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.926323367s
Feb  8 13:14:40.278: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.914634714s
Feb  8 13:14:41.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 903.899182ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6266
Feb  8 13:14:42.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:14:43.023: INFO: stderr: "I0208 13:14:42.526803    1458 log.go:172] (0xc0008e60b0) (0xc0008d40a0) Create stream\nI0208 13:14:42.527347    1458 log.go:172] (0xc0008e60b0) (0xc0008d40a0) Stream added, broadcasting: 1\nI0208 13:14:42.551082    1458 log.go:172] (0xc0008e60b0) Reply frame received for 1\nI0208 13:14:42.551154    1458 log.go:172] (0xc0008e60b0) (0xc0009e4000) Create stream\nI0208 13:14:42.551175    1458 log.go:172] (0xc0008e60b0) (0xc0009e4000) Stream added, broadcasting: 3\nI0208 13:14:42.552994    1458 log.go:172] (0xc0008e60b0) Reply frame received for 3\nI0208 13:14:42.553045    1458 log.go:172] (0xc0008e60b0) (0xc0005fe1e0) Create stream\nI0208 13:14:42.553059    1458 log.go:172] (0xc0008e60b0) (0xc0005fe1e0) Stream added, broadcasting: 5\nI0208 13:14:42.556513    1458 log.go:172] (0xc0008e60b0) Reply frame received for 5\nI0208 13:14:42.870107    1458 log.go:172] (0xc0008e60b0) Data frame received for 5\nI0208 13:14:42.870222    1458 log.go:172] (0xc0005fe1e0) (5) Data frame handling\nI0208 13:14:42.870259    1458 log.go:172] (0xc0005fe1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 13:14:42.870334    1458 log.go:172] (0xc0008e60b0) Data frame received for 3\nI0208 13:14:42.870451    1458 log.go:172] (0xc0009e4000) (3) Data frame handling\nI0208 13:14:42.870502    1458 log.go:172] (0xc0009e4000) (3) Data frame sent\nI0208 13:14:43.009880    1458 log.go:172] (0xc0008e60b0) (0xc0009e4000) Stream removed, broadcasting: 3\nI0208 13:14:43.010096    1458 log.go:172] (0xc0008e60b0) Data frame received for 1\nI0208 13:14:43.010114    1458 log.go:172] (0xc0008d40a0) (1) Data frame handling\nI0208 13:14:43.010147    1458 log.go:172] (0xc0008d40a0) (1) Data frame sent\nI0208 13:14:43.010163    1458 log.go:172] (0xc0008e60b0) (0xc0008d40a0) Stream removed, broadcasting: 1\nI0208 13:14:43.010286    1458 log.go:172] (0xc0008e60b0) (0xc0005fe1e0) Stream removed, broadcasting: 5\nI0208 13:14:43.010357    1458 log.go:172] (0xc0008e60b0) Go away received\nI0208 13:14:43.010885    1458 log.go:172] (0xc0008e60b0) (0xc0008d40a0) Stream removed, broadcasting: 1\nI0208 13:14:43.010911    1458 log.go:172] (0xc0008e60b0) (0xc0009e4000) Stream removed, broadcasting: 3\nI0208 13:14:43.010922    1458 log.go:172] (0xc0008e60b0) (0xc0005fe1e0) Stream removed, broadcasting: 5\n"
Feb  8 13:14:43.023: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:14:43.023: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:14:43.028: INFO: Found 1 stateful pods, waiting for 3
Feb  8 13:14:53.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:14:53.037: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:14:53.037: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 13:15:03.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:15:03.037: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:15:03.037: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  8 13:15:03.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:15:03.642: INFO: stderr: "I0208 13:15:03.300215    1478 log.go:172] (0xc000140dc0) (0xc0003ba820) Create stream\nI0208 13:15:03.300361    1478 log.go:172] (0xc000140dc0) (0xc0003ba820) Stream added, broadcasting: 1\nI0208 13:15:03.319533    1478 log.go:172] (0xc000140dc0) Reply frame received for 1\nI0208 13:15:03.319657    1478 log.go:172] (0xc000140dc0) (0xc0003ba000) Create stream\nI0208 13:15:03.319677    1478 log.go:172] (0xc000140dc0) (0xc0003ba000) Stream added, broadcasting: 3\nI0208 13:15:03.322346    1478 log.go:172] (0xc000140dc0) Reply frame received for 3\nI0208 13:15:03.322386    1478 log.go:172] (0xc000140dc0) (0xc000640140) Create stream\nI0208 13:15:03.322401    1478 log.go:172] (0xc000140dc0) (0xc000640140) Stream added, broadcasting: 5\nI0208 13:15:03.324457    1478 log.go:172] (0xc000140dc0) Reply frame received for 5\nI0208 13:15:03.441309    1478 log.go:172] (0xc000140dc0) Data frame received for 3\nI0208 13:15:03.441437    1478 log.go:172] (0xc0003ba000) (3) Data frame handling\nI0208 13:15:03.441557    1478 log.go:172] (0xc0003ba000) (3) Data frame sent\nI0208 13:15:03.441674    1478 log.go:172] (0xc000140dc0) Data frame received for 5\nI0208 13:15:03.441688    1478 log.go:172] (0xc000640140) (5) Data frame handling\nI0208 13:15:03.441750    1478 log.go:172] (0xc000640140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:15:03.622042    1478 log.go:172] (0xc000140dc0) (0xc0003ba000) Stream removed, broadcasting: 3\nI0208 13:15:03.622194    1478 log.go:172] (0xc000140dc0) Data frame received for 1\nI0208 13:15:03.622296    1478 log.go:172] (0xc000140dc0) (0xc000640140) Stream removed, broadcasting: 5\nI0208 13:15:03.622360    1478 log.go:172] (0xc0003ba820) (1) Data frame handling\nI0208 13:15:03.622426    1478 log.go:172] (0xc0003ba820) (1) Data frame sent\nI0208 13:15:03.622452    1478 log.go:172] (0xc000140dc0) (0xc0003ba820) Stream removed, broadcasting: 1\nI0208 13:15:03.622643    1478 log.go:172] 
(0xc000140dc0) Go away received\nI0208 13:15:03.623522    1478 log.go:172] (0xc000140dc0) (0xc0003ba820) Stream removed, broadcasting: 1\nI0208 13:15:03.623566    1478 log.go:172] (0xc000140dc0) (0xc0003ba000) Stream removed, broadcasting: 3\nI0208 13:15:03.623584    1478 log.go:172] (0xc000140dc0) (0xc000640140) Stream removed, broadcasting: 5\n"
Feb  8 13:15:03.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:15:03.642: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:15:03.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:15:04.054: INFO: stderr: "I0208 13:15:03.808380    1500 log.go:172] (0xc00013a6e0) (0xc0005da640) Create stream\nI0208 13:15:03.808618    1500 log.go:172] (0xc00013a6e0) (0xc0005da640) Stream added, broadcasting: 1\nI0208 13:15:03.812887    1500 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0208 13:15:03.812909    1500 log.go:172] (0xc00013a6e0) (0xc0007c6000) Create stream\nI0208 13:15:03.812918    1500 log.go:172] (0xc00013a6e0) (0xc0007c6000) Stream added, broadcasting: 3\nI0208 13:15:03.814021    1500 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0208 13:15:03.814040    1500 log.go:172] (0xc00013a6e0) (0xc0006e8000) Create stream\nI0208 13:15:03.814047    1500 log.go:172] (0xc00013a6e0) (0xc0006e8000) Stream added, broadcasting: 5\nI0208 13:15:03.815047    1500 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0208 13:15:03.909329    1500 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0208 13:15:03.909726    1500 log.go:172] (0xc0006e8000) (5) Data frame handling\nI0208 13:15:03.909845    1500 log.go:172] (0xc0006e8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:15:03.938372    1500 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0208 13:15:03.938603    1500 log.go:172] (0xc0007c6000) (3) Data frame handling\nI0208 13:15:03.938680    1500 log.go:172] (0xc0007c6000) (3) Data frame sent\nI0208 13:15:04.042744    1500 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0208 13:15:04.042833    1500 log.go:172] (0xc0005da640) (1) Data frame handling\nI0208 13:15:04.042869    1500 log.go:172] (0xc0005da640) (1) Data frame sent\nI0208 13:15:04.043124    1500 log.go:172] (0xc00013a6e0) (0xc0005da640) Stream removed, broadcasting: 1\nI0208 13:15:04.043307    1500 log.go:172] (0xc00013a6e0) (0xc0006e8000) Stream removed, broadcasting: 5\nI0208 13:15:04.043361    1500 log.go:172] (0xc00013a6e0) (0xc0007c6000) Stream removed, broadcasting: 3\nI0208 13:15:04.043405    1500 log.go:172] 
(0xc00013a6e0) Go away received\nI0208 13:15:04.043687    1500 log.go:172] (0xc00013a6e0) (0xc0005da640) Stream removed, broadcasting: 1\nI0208 13:15:04.043758    1500 log.go:172] (0xc00013a6e0) (0xc0007c6000) Stream removed, broadcasting: 3\nI0208 13:15:04.043782    1500 log.go:172] (0xc00013a6e0) (0xc0006e8000) Stream removed, broadcasting: 5\n"
Feb  8 13:15:04.054: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:15:04.054: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:15:04.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:15:04.629: INFO: stderr: "I0208 13:15:04.262281    1518 log.go:172] (0xc000732370) (0xc00028a780) Create stream\nI0208 13:15:04.262370    1518 log.go:172] (0xc000732370) (0xc00028a780) Stream added, broadcasting: 1\nI0208 13:15:04.268987    1518 log.go:172] (0xc000732370) Reply frame received for 1\nI0208 13:15:04.269031    1518 log.go:172] (0xc000732370) (0xc000884000) Create stream\nI0208 13:15:04.269043    1518 log.go:172] (0xc000732370) (0xc000884000) Stream added, broadcasting: 3\nI0208 13:15:04.270412    1518 log.go:172] (0xc000732370) Reply frame received for 3\nI0208 13:15:04.270428    1518 log.go:172] (0xc000732370) (0xc00028a820) Create stream\nI0208 13:15:04.270433    1518 log.go:172] (0xc000732370) (0xc00028a820) Stream added, broadcasting: 5\nI0208 13:15:04.271796    1518 log.go:172] (0xc000732370) Reply frame received for 5\nI0208 13:15:04.392106    1518 log.go:172] (0xc000732370) Data frame received for 5\nI0208 13:15:04.392194    1518 log.go:172] (0xc00028a820) (5) Data frame handling\nI0208 13:15:04.392216    1518 log.go:172] (0xc00028a820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:15:04.429740    1518 log.go:172] (0xc000732370) Data frame received for 3\nI0208 13:15:04.429782    1518 log.go:172] (0xc000884000) (3) Data frame handling\nI0208 13:15:04.429798    1518 log.go:172] (0xc000884000) (3) Data frame sent\nI0208 13:15:04.615075    1518 log.go:172] (0xc000732370) (0xc000884000) Stream removed, broadcasting: 3\nI0208 13:15:04.615219    1518 log.go:172] (0xc000732370) Data frame received for 1\nI0208 13:15:04.615239    1518 log.go:172] (0xc00028a780) (1) Data frame handling\nI0208 13:15:04.615265    1518 log.go:172] (0xc00028a780) (1) Data frame sent\nI0208 13:15:04.615296    1518 log.go:172] (0xc000732370) (0xc00028a780) Stream removed, broadcasting: 1\nI0208 13:15:04.615327    1518 log.go:172] (0xc000732370) (0xc00028a820) Stream removed, broadcasting: 5\nI0208 13:15:04.615361    1518 log.go:172] 
(0xc000732370) Go away received\nI0208 13:15:04.616304    1518 log.go:172] (0xc000732370) (0xc00028a780) Stream removed, broadcasting: 1\nI0208 13:15:04.616324    1518 log.go:172] (0xc000732370) (0xc000884000) Stream removed, broadcasting: 3\nI0208 13:15:04.616341    1518 log.go:172] (0xc000732370) (0xc00028a820) Stream removed, broadcasting: 5\n"
Feb  8 13:15:04.629: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:15:04.629: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:15:04.629: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:15:04.646: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  8 13:15:14.663: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:15:14.663: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:15:14.663: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:15:14.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999078s
Feb  8 13:15:15.699: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988014313s
Feb  8 13:15:16.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980062432s
Feb  8 13:15:17.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964674527s
Feb  8 13:15:18.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.953553179s
Feb  8 13:15:19.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.944315965s
Feb  8 13:15:20.753: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.935982939s
Feb  8 13:15:21.798: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.926841173s
Feb  8 13:15:22.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.881532639s
Feb  8 13:15:23.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 875.205759ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6266
Feb  8 13:15:24.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:15:25.343: INFO: stderr: "I0208 13:15:25.059097    1538 log.go:172] (0xc0006b44d0) (0xc0001ba820) Create stream\nI0208 13:15:25.059360    1538 log.go:172] (0xc0006b44d0) (0xc0001ba820) Stream added, broadcasting: 1\nI0208 13:15:25.065682    1538 log.go:172] (0xc0006b44d0) Reply frame received for 1\nI0208 13:15:25.065733    1538 log.go:172] (0xc0006b44d0) (0xc0009ac000) Create stream\nI0208 13:15:25.065746    1538 log.go:172] (0xc0006b44d0) (0xc0009ac000) Stream added, broadcasting: 3\nI0208 13:15:25.067994    1538 log.go:172] (0xc0006b44d0) Reply frame received for 3\nI0208 13:15:25.068047    1538 log.go:172] (0xc0006b44d0) (0xc000654000) Create stream\nI0208 13:15:25.068083    1538 log.go:172] (0xc0006b44d0) (0xc000654000) Stream added, broadcasting: 5\nI0208 13:15:25.069457    1538 log.go:172] (0xc0006b44d0) Reply frame received for 5\nI0208 13:15:25.168643    1538 log.go:172] (0xc0006b44d0) Data frame received for 5\nI0208 13:15:25.168703    1538 log.go:172] (0xc000654000) (5) Data frame handling\nI0208 13:15:25.168715    1538 log.go:172] (0xc000654000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 13:15:25.168755    1538 log.go:172] (0xc0006b44d0) Data frame received for 3\nI0208 13:15:25.168763    1538 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0208 13:15:25.168772    1538 log.go:172] (0xc0009ac000) (3) Data frame sent\nI0208 13:15:25.331881    1538 log.go:172] (0xc0006b44d0) Data frame received for 1\nI0208 13:15:25.332032    1538 log.go:172] (0xc0006b44d0) (0xc000654000) Stream removed, broadcasting: 5\nI0208 13:15:25.332165    1538 log.go:172] (0xc0006b44d0) (0xc0009ac000) Stream removed, broadcasting: 3\nI0208 13:15:25.332271    1538 log.go:172] (0xc0001ba820) (1) Data frame handling\nI0208 13:15:25.332311    1538 log.go:172] (0xc0001ba820) (1) Data frame sent\nI0208 13:15:25.332323    1538 log.go:172] (0xc0006b44d0) (0xc0001ba820) Stream removed, broadcasting: 1\nI0208 13:15:25.332347    1538 log.go:172] 
(0xc0006b44d0) Go away received\nI0208 13:15:25.333047    1538 log.go:172] (0xc0006b44d0) (0xc0001ba820) Stream removed, broadcasting: 1\nI0208 13:15:25.333074    1538 log.go:172] (0xc0006b44d0) (0xc0009ac000) Stream removed, broadcasting: 3\nI0208 13:15:25.333081    1538 log.go:172] (0xc0006b44d0) (0xc000654000) Stream removed, broadcasting: 5\n"
Feb  8 13:15:25.343: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:15:25.343: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:15:25.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:15:25.675: INFO: stderr: "I0208 13:15:25.467986    1558 log.go:172] (0xc000116790) (0xc0004386e0) Create stream\nI0208 13:15:25.468211    1558 log.go:172] (0xc000116790) (0xc0004386e0) Stream added, broadcasting: 1\nI0208 13:15:25.470448    1558 log.go:172] (0xc000116790) Reply frame received for 1\nI0208 13:15:25.470478    1558 log.go:172] (0xc000116790) (0xc00010e280) Create stream\nI0208 13:15:25.470490    1558 log.go:172] (0xc000116790) (0xc00010e280) Stream added, broadcasting: 3\nI0208 13:15:25.472284    1558 log.go:172] (0xc000116790) Reply frame received for 3\nI0208 13:15:25.472323    1558 log.go:172] (0xc000116790) (0xc000204000) Create stream\nI0208 13:15:25.472339    1558 log.go:172] (0xc000116790) (0xc000204000) Stream added, broadcasting: 5\nI0208 13:15:25.474736    1558 log.go:172] (0xc000116790) Reply frame received for 5\nI0208 13:15:25.591443    1558 log.go:172] (0xc000116790) Data frame received for 5\nI0208 13:15:25.591508    1558 log.go:172] (0xc000204000) (5) Data frame handling\nI0208 13:15:25.591526    1558 log.go:172] (0xc000204000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 13:15:25.591545    1558 log.go:172] (0xc000116790) Data frame received for 3\nI0208 13:15:25.591553    1558 log.go:172] (0xc00010e280) (3) Data frame handling\nI0208 13:15:25.591564    1558 log.go:172] (0xc00010e280) (3) Data frame sent\nI0208 13:15:25.667979    1558 log.go:172] (0xc000116790) Data frame received for 1\nI0208 13:15:25.668146    1558 log.go:172] (0xc000116790) (0xc00010e280) Stream removed, broadcasting: 3\nI0208 13:15:25.668176    1558 log.go:172] (0xc0004386e0) (1) Data frame handling\nI0208 13:15:25.668187    1558 log.go:172] (0xc0004386e0) (1) Data frame sent\nI0208 13:15:25.668205    1558 log.go:172] (0xc000116790) (0xc000204000) Stream removed, broadcasting: 5\nI0208 13:15:25.668224    1558 log.go:172] (0xc000116790) (0xc0004386e0) Stream removed, broadcasting: 1\nI0208 13:15:25.668244    1558 log.go:172] 
(0xc000116790) Go away received\nI0208 13:15:25.668927    1558 log.go:172] (0xc000116790) (0xc0004386e0) Stream removed, broadcasting: 1\nI0208 13:15:25.668959    1558 log.go:172] (0xc000116790) (0xc00010e280) Stream removed, broadcasting: 3\nI0208 13:15:25.668984    1558 log.go:172] (0xc000116790) (0xc000204000) Stream removed, broadcasting: 5\n"
Feb  8 13:15:25.675: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:15:25.675: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:15:25.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6266 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:15:26.216: INFO: stderr: "I0208 13:15:25.874608    1574 log.go:172] (0xc000a20420) (0xc000416820) Create stream\nI0208 13:15:25.874791    1574 log.go:172] (0xc000a20420) (0xc000416820) Stream added, broadcasting: 1\nI0208 13:15:25.894514    1574 log.go:172] (0xc000a20420) Reply frame received for 1\nI0208 13:15:25.894612    1574 log.go:172] (0xc000a20420) (0xc0005e8140) Create stream\nI0208 13:15:25.894643    1574 log.go:172] (0xc000a20420) (0xc0005e8140) Stream added, broadcasting: 3\nI0208 13:15:25.896151    1574 log.go:172] (0xc000a20420) Reply frame received for 3\nI0208 13:15:25.896183    1574 log.go:172] (0xc000a20420) (0xc0005e81e0) Create stream\nI0208 13:15:25.896193    1574 log.go:172] (0xc000a20420) (0xc0005e81e0) Stream added, broadcasting: 5\nI0208 13:15:25.897379    1574 log.go:172] (0xc000a20420) Reply frame received for 5\nI0208 13:15:26.014658    1574 log.go:172] (0xc000a20420) Data frame received for 3\nI0208 13:15:26.014741    1574 log.go:172] (0xc0005e8140) (3) Data frame handling\nI0208 13:15:26.014763    1574 log.go:172] (0xc0005e8140) (3) Data frame sent\nI0208 13:15:26.014832    1574 log.go:172] (0xc000a20420) Data frame received for 5\nI0208 13:15:26.014857    1574 log.go:172] (0xc0005e81e0) (5) Data frame handling\nI0208 13:15:26.014887    1574 log.go:172] (0xc0005e81e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 13:15:26.202375    1574 log.go:172] (0xc000a20420) Data frame received for 1\nI0208 13:15:26.202664    1574 log.go:172] (0xc000a20420) (0xc0005e81e0) Stream removed, broadcasting: 5\nI0208 13:15:26.202812    1574 log.go:172] (0xc000416820) (1) Data frame handling\nI0208 13:15:26.202859    1574 log.go:172] (0xc000416820) (1) Data frame sent\nI0208 13:15:26.202993    1574 log.go:172] (0xc000a20420) (0xc0005e8140) Stream removed, broadcasting: 3\nI0208 13:15:26.203212    1574 log.go:172] (0xc000a20420) (0xc000416820) Stream removed, broadcasting: 1\nI0208 13:15:26.203279    1574 log.go:172] 
(0xc000a20420) Go away received\nI0208 13:15:26.204499    1574 log.go:172] (0xc000a20420) (0xc000416820) Stream removed, broadcasting: 1\nI0208 13:15:26.204532    1574 log.go:172] (0xc000a20420) (0xc0005e8140) Stream removed, broadcasting: 3\nI0208 13:15:26.204549    1574 log.go:172] (0xc000a20420) (0xc0005e81e0) Stream removed, broadcasting: 5\n"
Feb  8 13:15:26.216: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:15:26.216: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:15:26.216: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  8 13:15:56.245: INFO: Deleting all statefulset in ns statefulset-6266
Feb  8 13:15:56.252: INFO: Scaling statefulset ss to 0
Feb  8 13:15:56.272: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:15:56.277: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:15:56.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6266" for this suite.
Feb  8 13:16:02.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:16:02.466: INFO: namespace statefulset-6266 deletion completed in 6.134545147s

• [SLOW TEST:111.357 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
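The test above verifies that the StatefulSet scaled up in ordinal order (ss-0, ss-1, ss-2) and scaled down in reverse. The invariant it checks can be sketched as a small pure function; the `ss-<n>` name parsing below mirrors the pod names in the log, not the e2e framework's actual implementation.

```python
def ordinal(pod_name: str) -> int:
    """Extract the ordinal from a StatefulSet pod name like 'ss-2'."""
    return int(pod_name.rsplit("-", 1)[1])

def scaled_in_order(events: list[str], reverse: bool = False) -> bool:
    """Check that pods transitioned (became ready, or were deleted) in
    ordinal order: ascending for scale-up, descending for scale-down."""
    ordinals = [ordinal(p) for p in events]
    return ordinals == sorted(ordinals, reverse=reverse)

# Scale-up from the log: ss-0 ready first, then ss-1, then ss-2.
assert scaled_in_order(["ss-0", "ss-1", "ss-2"])
# Scale-down to 0 removes the highest ordinal first.
assert scaled_in_order(["ss-2", "ss-1", "ss-0"], reverse=True)
```

The `mv index.html` exec commands in the log exist to flip the nginx readiness probe: moving the file out makes every pod unready, which is what lets the test confirm that scaling halts while any stateful pod is unhealthy.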
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:16:02.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  8 13:16:11.226: INFO: Successfully updated pod "annotationupdateac4c71ec-531f-49b7-a613-2cf0221cbe1f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:16:13.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6923" for this suite.
Feb  8 13:16:35.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:16:35.489: INFO: namespace projected-6923 deletion completed in 22.187449309s

• [SLOW TEST:33.022 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
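The projected downwardAPI test above creates a pod whose volume exposes the pod's own annotations as a file, updates an annotation, and waits for the kubelet to refresh the file. A minimal manifest of that shape (pod name, image, and annotation key here are illustrative, not the test's actual spec) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    builder: alice          # updating this later changes the projected file
spec:
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
```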
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:16:35.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 13:16:35.794: INFO: Number of nodes with available pods: 0
Feb  8 13:16:35.794: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:36.809: INFO: Number of nodes with available pods: 0
Feb  8 13:16:36.809: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:38.150: INFO: Number of nodes with available pods: 0
Feb  8 13:16:38.150: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:38.803: INFO: Number of nodes with available pods: 0
Feb  8 13:16:38.803: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:39.826: INFO: Number of nodes with available pods: 0
Feb  8 13:16:39.827: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:40.914: INFO: Number of nodes with available pods: 0
Feb  8 13:16:40.914: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:43.649: INFO: Number of nodes with available pods: 0
Feb  8 13:16:43.649: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:43.840: INFO: Number of nodes with available pods: 0
Feb  8 13:16:43.840: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:44.810: INFO: Number of nodes with available pods: 0
Feb  8 13:16:44.810: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:16:45.808: INFO: Number of nodes with available pods: 1
Feb  8 13:16:45.808: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:46.809: INFO: Number of nodes with available pods: 2
Feb  8 13:16:46.809: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  8 13:16:46.896: INFO: Number of nodes with available pods: 1
Feb  8 13:16:46.896: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:47.909: INFO: Number of nodes with available pods: 1
Feb  8 13:16:47.909: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:48.909: INFO: Number of nodes with available pods: 1
Feb  8 13:16:48.909: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:49.910: INFO: Number of nodes with available pods: 1
Feb  8 13:16:49.910: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:50.915: INFO: Number of nodes with available pods: 1
Feb  8 13:16:50.915: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:51.911: INFO: Number of nodes with available pods: 1
Feb  8 13:16:51.911: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:52.911: INFO: Number of nodes with available pods: 1
Feb  8 13:16:52.911: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:53.925: INFO: Number of nodes with available pods: 1
Feb  8 13:16:53.925: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:54.910: INFO: Number of nodes with available pods: 1
Feb  8 13:16:54.910: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:55.913: INFO: Number of nodes with available pods: 1
Feb  8 13:16:55.913: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:56.911: INFO: Number of nodes with available pods: 1
Feb  8 13:16:56.911: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:57.938: INFO: Number of nodes with available pods: 1
Feb  8 13:16:57.939: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:58.922: INFO: Number of nodes with available pods: 1
Feb  8 13:16:58.922: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:16:59.934: INFO: Number of nodes with available pods: 1
Feb  8 13:16:59.934: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:17:00.907: INFO: Number of nodes with available pods: 1
Feb  8 13:17:00.907: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:17:03.009: INFO: Number of nodes with available pods: 1
Feb  8 13:17:03.009: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:17:03.922: INFO: Number of nodes with available pods: 1
Feb  8 13:17:03.922: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:17:04.918: INFO: Number of nodes with available pods: 1
Feb  8 13:17:04.918: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 13:17:05.914: INFO: Number of nodes with available pods: 2
Feb  8 13:17:05.914: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4241, will wait for the garbage collector to delete the pods
Feb  8 13:17:05.992: INFO: Deleting DaemonSet.extensions daemon-set took: 16.129405ms
Feb  8 13:17:06.292: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.480502ms
Feb  8 13:17:16.602: INFO: Number of nodes with available pods: 0
Feb  8 13:17:16.602: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 13:17:16.611: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4241/daemonsets","resourceVersion":"23568898"},"items":null}

Feb  8 13:17:16.615: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4241/pods","resourceVersion":"23568898"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:17:16.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4241" for this suite.
Feb  8 13:17:22.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:17:22.826: INFO: namespace daemonsets-4241 deletion completed in 6.162451584s

• [SLOW TEST:47.337 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
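The "simple DaemonSet" test above expects one pod per schedulable node and checks that a deleted pod is revived by the controller. A manifest of roughly the shape the test creates (the image and label key here are placeholders; the e2e suite uses its own test images):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: nginx        # placeholder image
        ports:
        - containerPort: 80
```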
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:17:22.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-797
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 13:17:22.932: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 13:18:01.168: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-797 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:18:01.168: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:18:01.256378       8 log.go:172] (0xc00098db80) (0xc0012b68c0) Create stream
I0208 13:18:01.256445       8 log.go:172] (0xc00098db80) (0xc0012b68c0) Stream added, broadcasting: 1
I0208 13:18:01.267952       8 log.go:172] (0xc00098db80) Reply frame received for 1
I0208 13:18:01.268049       8 log.go:172] (0xc00098db80) (0xc0017a1cc0) Create stream
I0208 13:18:01.268068       8 log.go:172] (0xc00098db80) (0xc0017a1cc0) Stream added, broadcasting: 3
I0208 13:18:01.271333       8 log.go:172] (0xc00098db80) Reply frame received for 3
I0208 13:18:01.271386       8 log.go:172] (0xc00098db80) (0xc0012b6960) Create stream
I0208 13:18:01.271400       8 log.go:172] (0xc00098db80) (0xc0012b6960) Stream added, broadcasting: 5
I0208 13:18:01.275128       8 log.go:172] (0xc00098db80) Reply frame received for 5
I0208 13:18:01.614286       8 log.go:172] (0xc00098db80) Data frame received for 3
I0208 13:18:01.614352       8 log.go:172] (0xc0017a1cc0) (3) Data frame handling
I0208 13:18:01.614391       8 log.go:172] (0xc0017a1cc0) (3) Data frame sent
I0208 13:18:01.742608       8 log.go:172] (0xc00098db80) Data frame received for 1
I0208 13:18:01.742672       8 log.go:172] (0xc0012b68c0) (1) Data frame handling
I0208 13:18:01.742682       8 log.go:172] (0xc0012b68c0) (1) Data frame sent
I0208 13:18:01.742694       8 log.go:172] (0xc00098db80) (0xc0012b68c0) Stream removed, broadcasting: 1
I0208 13:18:01.742791       8 log.go:172] (0xc00098db80) (0xc0017a1cc0) Stream removed, broadcasting: 3
I0208 13:18:01.743204       8 log.go:172] (0xc00098db80) (0xc0012b6960) Stream removed, broadcasting: 5
I0208 13:18:01.743419       8 log.go:172] (0xc00098db80) Go away received
I0208 13:18:01.743468       8 log.go:172] (0xc00098db80) (0xc0012b68c0) Stream removed, broadcasting: 1
I0208 13:18:01.743531       8 log.go:172] (0xc00098db80) (0xc0017a1cc0) Stream removed, broadcasting: 3
I0208 13:18:01.743594       8 log.go:172] (0xc00098db80) (0xc0012b6960) Stream removed, broadcasting: 5
Feb  8 13:18:01.743: INFO: Waiting for endpoints: map[]
Feb  8 13:18:01.754: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-797 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:18:01.754: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:18:01.819496       8 log.go:172] (0xc001f59550) (0xc002146000) Create stream
I0208 13:18:01.819548       8 log.go:172] (0xc001f59550) (0xc002146000) Stream added, broadcasting: 1
I0208 13:18:01.828645       8 log.go:172] (0xc001f59550) Reply frame received for 1
I0208 13:18:01.828680       8 log.go:172] (0xc001f59550) (0xc0005b23c0) Create stream
I0208 13:18:01.828688       8 log.go:172] (0xc001f59550) (0xc0005b23c0) Stream added, broadcasting: 3
I0208 13:18:01.831374       8 log.go:172] (0xc001f59550) Reply frame received for 3
I0208 13:18:01.831439       8 log.go:172] (0xc001f59550) (0xc0005b2500) Create stream
I0208 13:18:01.831448       8 log.go:172] (0xc001f59550) (0xc0005b2500) Stream added, broadcasting: 5
I0208 13:18:01.834975       8 log.go:172] (0xc001f59550) Reply frame received for 5
I0208 13:18:01.961406       8 log.go:172] (0xc001f59550) Data frame received for 3
I0208 13:18:01.961655       8 log.go:172] (0xc0005b23c0) (3) Data frame handling
I0208 13:18:01.961711       8 log.go:172] (0xc0005b23c0) (3) Data frame sent
I0208 13:18:02.112874       8 log.go:172] (0xc001f59550) (0xc0005b23c0) Stream removed, broadcasting: 3
I0208 13:18:02.113039       8 log.go:172] (0xc001f59550) Data frame received for 1
I0208 13:18:02.113052       8 log.go:172] (0xc002146000) (1) Data frame handling
I0208 13:18:02.113070       8 log.go:172] (0xc002146000) (1) Data frame sent
I0208 13:18:02.113346       8 log.go:172] (0xc001f59550) (0xc002146000) Stream removed, broadcasting: 1
I0208 13:18:02.113408       8 log.go:172] (0xc001f59550) (0xc0005b2500) Stream removed, broadcasting: 5
I0208 13:18:02.113420       8 log.go:172] (0xc001f59550) Go away received
I0208 13:18:02.113580       8 log.go:172] (0xc001f59550) (0xc002146000) Stream removed, broadcasting: 1
I0208 13:18:02.113709       8 log.go:172] (0xc001f59550) (0xc0005b23c0) Stream removed, broadcasting: 3
I0208 13:18:02.113723       8 log.go:172] (0xc001f59550) (0xc0005b2500) Stream removed, broadcasting: 5
Feb  8 13:18:02.113: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:18:02.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-797" for this suite.
Feb  8 13:18:24.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:18:24.240: INFO: namespace pod-network-test-797 deletion completed in 22.118578026s

• [SLOW TEST:61.414 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
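The intra-pod UDP check above works by curling the netexec `/dial` endpoint on one pod, which in turn probes the target pod over UDP and reports the hostname it got back. A minimal sketch of how that dial URL is assembled (the IPs and ports are taken from the log lines above, not hard requirements):

```shell
# Build the dial URL the e2e test curls from the host-test container.
# The /dial endpoint asks the target pod (host:port) for its hostName
# over the given protocol; one successful reply (tries=1) is enough.
dial_host="10.44.0.2"   # pod running the netexec HTTP server (from the log)
target="10.32.0.4"      # endpoint pod being probed over UDP (from the log)
url="http://${dial_host}:8080/dial?request=hostName&protocol=udp&host=${target}&port=8081&tries=1"
echo "$url"
```

The framework then matches the returned hostname against the expected endpoint set; `Waiting for endpoints: map[]` in the log means every expected endpoint has answered.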
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:18:24.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3975.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:18:36.405: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.413: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.419: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.425: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.430: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.434: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.439: INFO: Unable to read jessie_udp@PodARecord from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.445: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69: the server could not find the requested resource (get pods dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69)
Feb  8 13:18:36.445: INFO: Lookups using dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  8 13:18:41.531: INFO: DNS probes using dns-3975/dns-test-c999dab8-3b0a-4838-94f1-dc57a0793a69 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:18:42.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3975" for this suite.
Feb  8 13:18:48.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:18:49.159: INFO: namespace dns-3975 deletion completed in 6.240829956s

• [SLOW TEST:24.919 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
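The wheezy/jessie probe scripts above derive the pod A-record name (`PodARecord`) from the pod's own IP with awk; the doubled `$$` is template escaping in the test source and becomes a single `$` at runtime. That transformation can be sketched in isolation (the namespace suffix `dns-3975` comes from the log above):

```shell
# Convert a pod IP into the dashed A-record form queried for PodARecord,
# e.g. 10.44.0.2 -> 10-44-0-2.dns-3975.pod.cluster.local
pod_ip="10.44.0.2"   # in the real probe this is `hostname -i`
podARec="$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-3975.pod.cluster.local"}')"
echo "$podARec"
```

The early `Unable to read wheezy_udp@...` lines are expected: the probe pods write their `OK` result files over up to 600 one-second iterations, and the framework retries reads until every file appears, hence the later `DNS probes ... succeeded`.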
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:18:49.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  8 13:18:49.304: INFO: Waiting up to 5m0s for pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884" in namespace "containers-7193" to be "success or failure"
Feb  8 13:18:49.359: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884": Phase="Pending", Reason="", readiness=false. Elapsed: 54.445425ms
Feb  8 13:18:51.370: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065276101s
Feb  8 13:18:53.401: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096661536s
Feb  8 13:18:55.406: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101263771s
Feb  8 13:18:57.435: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130646331s
STEP: Saw pod success
Feb  8 13:18:57.435: INFO: Pod "client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884" satisfied condition "success or failure"
Feb  8 13:18:57.441: INFO: Trying to get logs from node iruya-node pod client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884 container test-container: 
STEP: delete the pod
Feb  8 13:18:57.609: INFO: Waiting for pod client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884 to disappear
Feb  8 13:18:57.629: INFO: Pod client-containers-afbbe24b-c317-4b3e-87f6-57c41e7d5884 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:18:57.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7193" for this suite.
Feb  8 13:19:03.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:19:03.805: INFO: namespace containers-7193 deletion completed in 6.171665898s

• [SLOW TEST:14.646 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
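The repeated `Phase="Pending" ... Elapsed:` lines above come from the framework polling the pod status every ~2 seconds until it reaches a terminal phase, up to the 5m0s deadline. A rough sketch of that wait loop, with a stub in place of the real API call (the stub and its behavior are illustrative only, not the framework's implementation):

```shell
# Poll until the pod reports a terminal phase (Succeeded/Failed) or we give up.
# get_phase is a stand-in for `kubectl get pod ... -o jsonpath={.status.phase}`;
# here it pretends the pod succeeds on the third poll.
tries=0
get_phase() {
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then echo "Succeeded"; else echo "Pending"; fi
}
phase="Pending"
while [ "$tries" -lt 150 ]; do    # ~5m at a 2s interval
  phase="$(get_phase)"
  case "$phase" in
    Succeeded|Failed) break ;;
  esac
  sleep 0   # the real framework sleeps ~2s between polls
done
echo "$phase"
```

Against a live cluster the same shape is what produces the log's "satisfied condition \"success or failure\"" line once the phase leaves Pending/Running.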
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:19:03.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3f2dd759-e1b2-4772-91e4-39578680bb69
STEP: Creating a pod to test consume configMaps
Feb  8 13:19:03.914: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06" in namespace "projected-1639" to be "success or failure"
Feb  8 13:19:03.927: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06": Phase="Pending", Reason="", readiness=false. Elapsed: 12.895056ms
Feb  8 13:19:05.936: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02211882s
Feb  8 13:19:07.948: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033581124s
Feb  8 13:19:09.955: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040601526s
Feb  8 13:19:11.966: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052111551s
STEP: Saw pod success
Feb  8 13:19:11.966: INFO: Pod "pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06" satisfied condition "success or failure"
Feb  8 13:19:11.972: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:19:12.146: INFO: Waiting for pod pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06 to disappear
Feb  8 13:19:12.167: INFO: Pod pod-projected-configmaps-e19cd86f-697c-4f2f-aa93-897a6190ec06 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:19:12.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1639" for this suite.
Feb  8 13:19:18.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:19:18.382: INFO: namespace projected-1639 deletion completed in 6.211726713s

• [SLOW TEST:14.576 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:19:18.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb  8 13:19:18.520: INFO: Waiting up to 5m0s for pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659" in namespace "var-expansion-4307" to be "success or failure"
Feb  8 13:19:18.539: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Pending", Reason="", readiness=false. Elapsed: 19.179342ms
Feb  8 13:19:20.551: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03187806s
Feb  8 13:19:22.562: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042520535s
Feb  8 13:19:24.574: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054150193s
Feb  8 13:19:26.583: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063743849s
Feb  8 13:19:28.594: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073885117s
STEP: Saw pod success
Feb  8 13:19:28.594: INFO: Pod "var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659" satisfied condition "success or failure"
Feb  8 13:19:28.597: INFO: Trying to get logs from node iruya-node pod var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659 container dapi-container: 
STEP: delete the pod
Feb  8 13:19:28.829: INFO: Waiting for pod var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659 to disappear
Feb  8 13:19:28.838: INFO: Pod var-expansion-1b7747d9-4d06-47c0-bf16-39764488a659 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:19:28.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4307" for this suite.
Feb  8 13:19:34.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:19:35.009: INFO: namespace var-expansion-4307 deletion completed in 6.16441442s

• [SLOW TEST:16.627 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:19:35.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-vq9j
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 13:19:35.116: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vq9j" in namespace "subpath-9320" to be "success or failure"
Feb  8 13:19:35.122: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 5.984364ms
Feb  8 13:19:37.134: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018097076s
Feb  8 13:19:39.141: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024817638s
Feb  8 13:19:41.148: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031930314s
Feb  8 13:19:43.153: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 8.036869111s
Feb  8 13:19:45.161: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 10.044998063s
Feb  8 13:19:47.170: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 12.053572865s
Feb  8 13:19:49.178: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 14.062178366s
Feb  8 13:19:51.187: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 16.070919361s
Feb  8 13:19:53.208: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 18.091704069s
Feb  8 13:19:55.216: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 20.099999373s
Feb  8 13:19:57.224: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 22.10817892s
Feb  8 13:19:59.234: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 24.117811461s
Feb  8 13:20:01.243: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 26.126702782s
Feb  8 13:20:03.256: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Running", Reason="", readiness=true. Elapsed: 28.139394613s
Feb  8 13:20:05.267: INFO: Pod "pod-subpath-test-configmap-vq9j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.150490361s
STEP: Saw pod success
Feb  8 13:20:05.267: INFO: Pod "pod-subpath-test-configmap-vq9j" satisfied condition "success or failure"
Feb  8 13:20:05.273: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-vq9j container test-container-subpath-configmap-vq9j: 
STEP: delete the pod
Feb  8 13:20:05.348: INFO: Waiting for pod pod-subpath-test-configmap-vq9j to disappear
Feb  8 13:20:05.352: INFO: Pod pod-subpath-test-configmap-vq9j no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vq9j
Feb  8 13:20:05.352: INFO: Deleting pod "pod-subpath-test-configmap-vq9j" in namespace "subpath-9320"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:20:05.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9320" for this suite.
Feb  8 13:20:11.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:20:11.576: INFO: namespace subpath-9320 deletion completed in 6.214817271s

• [SLOW TEST:36.568 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:20:11.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b6d1391a-2f6f-4544-9267-6debd2ebbe3a
STEP: Creating a pod to test consume configMaps
Feb  8 13:20:11.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e" in namespace "projected-8425" to be "success or failure"
Feb  8 13:20:11.682: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.11878ms
Feb  8 13:20:13.694: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021064052s
Feb  8 13:20:15.701: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02835188s
Feb  8 13:20:17.710: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037344445s
Feb  8 13:20:19.723: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049924865s
STEP: Saw pod success
Feb  8 13:20:19.723: INFO: Pod "pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e" satisfied condition "success or failure"
Feb  8 13:20:19.729: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:20:19.910: INFO: Waiting for pod pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e to disappear
Feb  8 13:20:19.936: INFO: Pod pod-projected-configmaps-f6dcb7da-ffee-43ad-a63a-5bf4ebb1638e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:20:19.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8425" for this suite.
Feb  8 13:20:25.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:20:26.072: INFO: namespace projected-8425 deletion completed in 6.115860076s

• [SLOW TEST:14.495 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:20:26.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  8 13:20:26.206: INFO: Waiting up to 5m0s for pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1" in namespace "emptydir-3343" to be "success or failure"
Feb  8 13:20:26.233: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.806818ms
Feb  8 13:20:28.246: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039914032s
Feb  8 13:20:30.253: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047026526s
Feb  8 13:20:32.269: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062826537s
Feb  8 13:20:34.275: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069083353s
Feb  8 13:20:36.285: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078853267s
STEP: Saw pod success
Feb  8 13:20:36.285: INFO: Pod "pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1" satisfied condition "success or failure"
Feb  8 13:20:36.293: INFO: Trying to get logs from node iruya-node pod pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1 container test-container: 
STEP: delete the pod
Feb  8 13:20:36.622: INFO: Waiting for pod pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1 to disappear
Feb  8 13:20:36.633: INFO: Pod pod-02ce750e-59df-4f94-8dfe-274b7cec0ea1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:20:36.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3343" for this suite.
Feb  8 13:20:42.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:20:42.803: INFO: namespace emptydir-3343 deletion completed in 6.161299442s

• [SLOW TEST:16.730 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
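Editor's note: the EmptyDir test above polls the pod from Pending to Succeeded over roughly ten seconds, printing an `Elapsed:` duration on each line. When triaging slow tests, those durations can be extracted mechanically. A minimal sketch of such a parser — the line format is an assumption read off the log above, not a framework API:

```python
import re

# Sample lines in the shape the e2e framework prints while polling a pod.
log_lines = [
    'Pod "p": Phase="Pending", Reason="", readiness=false. Elapsed: 26.806818ms',
    'Pod "p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039914032s',
    'Pod "p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078853267s',
]

def elapsed_seconds(line):
    """Parse the trailing 'Elapsed: <dur>' into seconds (handles ms and s)."""
    m = re.search(r'Elapsed: ([\d.]+)(ms|s)$', line)
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000.0 if unit == "ms" else value

durations = [elapsed_seconds(line) for line in log_lines]
# Polling output should be monotonically increasing.
assert durations == sorted(durations)
print(f"pod reached a terminal phase after ~{durations[-1]:.1f}s")
```

Applied to the full log, the same parse would flag tests whose final `Elapsed:` approaches the 5m0s pod wait or the SLOW TEST threshold.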
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:20:42.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3723fd88-efb7-4523-8d38-0ee68bbbdbd2
STEP: Creating a pod to test consume configMaps
Feb  8 13:20:42.976: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b" in namespace "projected-3455" to be "success or failure"
Feb  8 13:20:42.991: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.840425ms
Feb  8 13:20:45.000: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023575788s
Feb  8 13:20:47.006: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030241702s
Feb  8 13:20:49.013: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036888369s
Feb  8 13:20:51.080: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104011255s
STEP: Saw pod success
Feb  8 13:20:51.080: INFO: Pod "pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b" satisfied condition "success or failure"
Feb  8 13:20:51.086: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 13:20:51.168: INFO: Waiting for pod pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b to disappear
Feb  8 13:20:51.176: INFO: Pod pod-projected-configmaps-03629f2b-1468-4971-8241-6640681ab63b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:20:51.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3455" for this suite.
Feb  8 13:20:57.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:20:57.883: INFO: namespace projected-3455 deletion completed in 6.259937897s

• [SLOW TEST:15.080 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:20:57.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  8 13:20:57.938: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  8 13:20:57.965: INFO: Waiting for terminating namespaces to be deleted...
Feb  8 13:20:57.987: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  8 13:20:57.997: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  8 13:20:57.997: INFO: 	Container weave ready: true, restart count 0
Feb  8 13:20:57.997: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 13:20:57.997: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  8 13:20:57.998: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 13:20:57.998: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  8 13:20:58.013: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container coredns ready: true, restart count 0
Feb  8 13:20:58.013: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container etcd ready: true, restart count 0
Feb  8 13:20:58.013: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  8 13:20:58.013: INFO: 	Container weave ready: true, restart count 0
Feb  8 13:20:58.013: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 13:20:58.013: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  8 13:20:58.013: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 13:20:58.013: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  8 13:20:58.013: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  8 13:20:58.013: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  8 13:20:58.013: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f16ff8ad4075e9], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:20:59.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8148" for this suite.
Feb  8 13:21:05.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:21:05.226: INFO: namespace sched-pred-8148 deletion completed in 6.178527885s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.343 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
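Editor's note: the `FailedScheduling` event above ("0/2 nodes are available: 2 node(s) didn't match node selector.") is produced by a pod whose `nodeSelector` matches no node label. A hedged, illustrative manifest that reproduces that condition — the label key, pod name, and image are hypothetical, not the test's actual spec:

```python
# Hypothetical pod spec: no node carries this label, so the scheduler emits a
# FailedScheduling event like the one logged above instead of binding the pod.
restricted_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "restricted-pod"},
    "spec": {
        "containers": [{"name": "app", "image": "k8s.gcr.io/pause:3.1"}],
        # Assumed-nonexistent label; the scheduler then reports:
        # "0/2 nodes are available: 2 node(s) didn't match node selector."
        "nodeSelector": {"example.com/nonexistent-label": "true"},
    },
}

assert restricted_pod["spec"]["nodeSelector"]
```

The test passes precisely because the pod stays Pending and the warning event is observed before the namespace is destroyed.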
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:21:05.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gdjj7 in namespace proxy-7567
I0208 13:21:05.429727       8 runners.go:180] Created replication controller with name: proxy-service-gdjj7, namespace: proxy-7567, replica count: 1
I0208 13:21:06.480504       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:07.480951       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:08.481269       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:09.481551       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:10.481857       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:11.482180       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:12.482514       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 13:21:13.482870       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0208 13:21:14.483150       8 runners.go:180] proxy-service-gdjj7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  8 13:21:14.494: INFO: setup took 9.151945923s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 44.822226ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 44.905041ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 44.867534ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 44.633767ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 44.613655ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 44.889344ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 44.60732ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 44.754457ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 44.641261ms)
Feb  8 13:21:14.539: INFO: (0) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 44.874203ms)
Feb  8 13:21:14.540: INFO: (0) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 45.694506ms)
Feb  8 13:21:14.563: INFO: (0) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: ... (200; 11.197828ms)
Feb  8 13:21:14.580: INFO: (1) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 14.242028ms)
Feb  8 13:21:14.581: INFO: (1) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 15.024483ms)
Feb  8 13:21:14.582: INFO: (1) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 16.397927ms)
Feb  8 13:21:14.583: INFO: (1) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 17.281738ms)
Feb  8 13:21:14.583: INFO: (1) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 17.462155ms)
Feb  8 13:21:14.584: INFO: (1) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 17.667187ms)
Feb  8 13:21:14.584: INFO: (1) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test (200; 14.60726ms)
Feb  8 13:21:14.605: INFO: (2) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 14.826714ms)
Feb  8 13:21:14.609: INFO: (2) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 19.102413ms)
Feb  8 13:21:14.609: INFO: (2) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 19.27341ms)
Feb  8 13:21:14.609: INFO: (2) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test<... (200; 19.474813ms)
Feb  8 13:21:14.610: INFO: (2) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 19.930539ms)
Feb  8 13:21:14.610: INFO: (2) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 20.546502ms)
Feb  8 13:21:14.620: INFO: (3) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 8.923968ms)
Feb  8 13:21:14.620: INFO: (3) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 9.714003ms)
Feb  8 13:21:14.621: INFO: (3) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 10.223089ms)
Feb  8 13:21:14.621: INFO: (3) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 10.757073ms)
Feb  8 13:21:14.622: INFO: (3) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 11.201954ms)
Feb  8 13:21:14.623: INFO: (3) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 12.423104ms)
Feb  8 13:21:14.623: INFO: (3) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 12.801863ms)
Feb  8 13:21:14.623: INFO: (3) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 12.738527ms)
Feb  8 13:21:14.623: INFO: (3) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 12.595807ms)
Feb  8 13:21:14.623: INFO: (3) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 12.304312ms)
Feb  8 13:21:14.624: INFO: (3) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 12.740961ms)
Feb  8 13:21:14.624: INFO: (3) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test (200; 13.31099ms)
Feb  8 13:21:14.624: INFO: (3) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 13.264341ms)
Feb  8 13:21:14.624: INFO: (3) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 13.746058ms)
Feb  8 13:21:14.630: INFO: (4) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 5.802054ms)
Feb  8 13:21:14.631: INFO: (4) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 7.027302ms)
Feb  8 13:21:14.631: INFO: (4) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 6.988649ms)
Feb  8 13:21:14.633: INFO: (4) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 8.357491ms)
Feb  8 13:21:14.633: INFO: (4) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 8.465028ms)
Feb  8 13:21:14.634: INFO: (4) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 9.442112ms)
Feb  8 13:21:14.634: INFO: (4) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test<... (200; 9.587348ms)
Feb  8 13:21:14.634: INFO: (4) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 9.594309ms)
Feb  8 13:21:14.635: INFO: (4) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 10.493291ms)
Feb  8 13:21:14.636: INFO: (4) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 11.868429ms)
Feb  8 13:21:14.637: INFO: (4) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 12.237149ms)
Feb  8 13:21:14.637: INFO: (4) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 12.354393ms)
Feb  8 13:21:14.637: INFO: (4) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 12.585354ms)
Feb  8 13:21:14.637: INFO: (4) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 12.691664ms)
Feb  8 13:21:14.637: INFO: (4) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 12.831298ms)
Feb  8 13:21:14.644: INFO: (5) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 6.99954ms)
Feb  8 13:21:14.646: INFO: (5) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 8.739264ms)
Feb  8 13:21:14.646: INFO: (5) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 8.808566ms)
Feb  8 13:21:14.647: INFO: (5) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 9.362754ms)
Feb  8 13:21:14.647: INFO: (5) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 9.618191ms)
Feb  8 13:21:14.647: INFO: (5) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 9.787912ms)
Feb  8 13:21:14.648: INFO: (5) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 10.057176ms)
Feb  8 13:21:14.648: INFO: (5) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 10.120611ms)
Feb  8 13:21:14.648: INFO: (5) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 10.214398ms)
Feb  8 13:21:14.648: INFO: (5) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 10.304405ms)
Feb  8 13:21:14.648: INFO: (5) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test<... (200; 7.060003ms)
Feb  8 13:21:14.665: INFO: (6) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 13.492579ms)
Feb  8 13:21:14.665: INFO: (6) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 13.986228ms)
Feb  8 13:21:14.666: INFO: (6) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 14.418786ms)
Feb  8 13:21:14.666: INFO: (6) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 14.30409ms)
Feb  8 13:21:14.666: INFO: (6) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 14.493134ms)
Feb  8 13:21:14.666: INFO: (6) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 14.847886ms)
Feb  8 13:21:14.666: INFO: (6) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 14.901782ms)
Feb  8 13:21:14.667: INFO: (6) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 15.679003ms)
Feb  8 13:21:14.668: INFO: (6) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 16.648098ms)
Feb  8 13:21:14.669: INFO: (6) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test<... (200; 12.684921ms)
Feb  8 13:21:14.684: INFO: (7) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 13.440435ms)
Feb  8 13:21:14.684: INFO: (7) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 13.395951ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test (200; 13.680863ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 13.836751ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 13.924041ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 13.774773ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 13.874504ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 13.984509ms)
Feb  8 13:21:14.685: INFO: (7) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 14.239634ms)
Feb  8 13:21:14.686: INFO: (7) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 14.943387ms)
Feb  8 13:21:14.693: INFO: (8) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 7.21723ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 7.60006ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 7.893665ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 7.92195ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 7.88628ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 7.921571ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 7.894962ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 7.986867ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 8.113229ms)
Feb  8 13:21:14.694: INFO: (8) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test (200; 10.761102ms)
Feb  8 13:21:14.707: INFO: (9) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 10.760029ms)
Feb  8 13:21:14.707: INFO: (9) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 10.962635ms)
Feb  8 13:21:14.707: INFO: (9) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 11.040378ms)
Feb  8 13:21:14.707: INFO: (9) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 10.947413ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 11.169157ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 11.338005ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 11.449447ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 11.434726ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 11.491905ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 12.087476ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 12.092464ms)
Feb  8 13:21:14.708: INFO: (9) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test<... (200; 5.797937ms)
Feb  8 13:21:14.715: INFO: (10) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 5.810863ms)
Feb  8 13:21:14.715: INFO: (10) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 5.765118ms)
Feb  8 13:21:14.715: INFO: (10) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 6.09455ms)
Feb  8 13:21:14.715: INFO: (10) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: ... (200; 6.973829ms)
Feb  8 13:21:14.716: INFO: (10) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 6.888245ms)
Feb  8 13:21:14.716: INFO: (10) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 7.347691ms)
Feb  8 13:21:14.717: INFO: (10) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 7.86009ms)
Feb  8 13:21:14.717: INFO: (10) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 7.80164ms)
Feb  8 13:21:14.717: INFO: (10) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 7.839476ms)
Feb  8 13:21:14.717: INFO: (10) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 8.03985ms)
Feb  8 13:21:14.717: INFO: (10) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 8.254633ms)
Feb  8 13:21:14.722: INFO: (11) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 4.435835ms)
Feb  8 13:21:14.722: INFO: (11) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 4.596602ms)
Feb  8 13:21:14.722: INFO: (11) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 5.259318ms)
Feb  8 13:21:14.722: INFO: (11) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 5.284282ms)
Feb  8 13:21:14.723: INFO: (11) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 5.752253ms)
Feb  8 13:21:14.723: INFO: (11) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 5.830259ms)
Feb  8 13:21:14.723: INFO: (11) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 6.07094ms)
Feb  8 13:21:14.723: INFO: (11) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 6.163482ms)
Feb  8 13:21:14.723: INFO: (11) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 6.307816ms)
Feb  8 13:21:14.724: INFO: (11) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: test (200; 5.711868ms)
Feb  8 13:21:14.733: INFO: (12) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 6.709495ms)
Feb  8 13:21:14.733: INFO: (12) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 7.00231ms)
Feb  8 13:21:14.733: INFO: (12) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 6.618533ms)
Feb  8 13:21:14.734: INFO: (12) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 6.523859ms)
Feb  8 13:21:14.734: INFO: (12) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 6.607099ms)
Feb  8 13:21:14.734: INFO: (12) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 6.133307ms)
Feb  8 13:21:14.734: INFO: (12) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 6.473954ms)
Feb  8 13:21:14.736: INFO: (12) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 9.710449ms)
Feb  8 13:21:14.736: INFO: (12) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 9.678726ms)
Feb  8 13:21:14.736: INFO: (12) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 8.971249ms)
Feb  8 13:21:14.737: INFO: (12) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 9.510763ms)
Feb  8 13:21:14.737: INFO: (12) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 9.295125ms)
Feb  8 13:21:14.738: INFO: (12) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 10.996493ms)
Feb  8 13:21:14.750: INFO: (13) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 11.722119ms)
Feb  8 13:21:14.750: INFO: (13) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 11.693637ms)
Feb  8 13:21:14.750: INFO: (13) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 11.769046ms)
Feb  8 13:21:14.750: INFO: (13) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 11.71311ms)
Feb  8 13:21:14.750: INFO: (13) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 12.193288ms)
Feb  8 13:21:14.751: INFO: (13) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 12.829346ms)
Feb  8 13:21:14.751: INFO: (13) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 12.970676ms)
Feb  8 13:21:14.751: INFO: (13) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 13.033177ms)
Feb  8 13:21:14.751: INFO: (13) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 13.093635ms)
Feb  8 13:21:14.751: INFO: (13) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: [markup-bearing response body and several subsequent log lines lost in extraction] test<... (200; 9.528678ms)
Feb  8 13:21:14.763: INFO: (14) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 9.534733ms)
Feb  8 13:21:14.763: INFO: (14) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 9.671ms)
Feb  8 13:21:14.763: INFO: (14) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 9.570375ms)
Feb  8 13:21:14.763: INFO: (14) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 9.876682ms)
Feb  8 13:21:14.764: INFO: (14) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 10.71827ms)
Feb  8 13:21:14.765: INFO: (14) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 11.12583ms)
Feb  8 13:21:14.765: INFO: (14) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 11.426106ms)
Feb  8 13:21:14.765: INFO: (14) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 11.573629ms)
Feb  8 13:21:14.765: INFO: (14) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 11.619081ms)
Feb  8 13:21:14.765: INFO: (14) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 11.753827ms)
Feb  8 13:21:14.773: INFO: (15) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 7.922256ms)
Feb  8 13:21:14.774: INFO: (15) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 7.971922ms)
Feb  8 13:21:14.774: INFO: (15) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 8.076689ms)
Feb  8 13:21:14.774: INFO: (15) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 8.017789ms)
Feb  8 13:21:14.774: INFO: (15) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 8.024933ms)
Feb  8 13:21:14.775: INFO: (15) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: [markup-bearing response body and adjacent log lines lost in extraction] test<... (200; 11.57921ms)
Feb  8 13:21:14.777: INFO: (15) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 11.629771ms)
Feb  8 13:21:14.777: INFO: (15) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 11.909294ms)
Feb  8 13:21:14.778: INFO: (15) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 12.096004ms)
Feb  8 13:21:14.778: INFO: (15) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 12.103269ms)
Feb  8 13:21:14.778: INFO: (15) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 12.288223ms)
Feb  8 13:21:14.778: INFO: (15) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 12.70278ms)
Feb  8 13:21:14.779: INFO: (15) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 13.862984ms)
Feb  8 13:21:14.789: INFO: (16) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 9.302812ms)
Feb  8 13:21:14.789: INFO: (16) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 9.227905ms)
Feb  8 13:21:14.790: INFO: (16) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 10.853871ms)
Feb  8 13:21:14.791: INFO: (16) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 11.227465ms)
Feb  8 13:21:14.791: INFO: (16) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 11.497889ms)
Feb  8 13:21:14.792: INFO: (16) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 12.726154ms)
Feb  8 13:21:14.793: INFO: (16) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 13.834116ms)
Feb  8 13:21:14.793: INFO: (16) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: [markup-bearing response body and adjacent log lines lost in extraction] ... (200; 10.283871ms)
Feb  8 13:21:14.805: INFO: (17) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 11.317085ms)
Feb  8 13:21:14.805: INFO: (17) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 11.461673ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 12.176184ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 12.269496ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 12.361641ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 12.483169ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 12.454091ms)
Feb  8 13:21:14.806: INFO: (17) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 12.502852ms)
Feb  8 13:21:14.807: INFO: (17) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 13.047041ms)
Feb  8 13:21:14.807: INFO: (17) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 13.263356ms)
Feb  8 13:21:14.807: INFO: (17) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 13.382293ms)
Feb  8 13:21:14.807: INFO: (17) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 13.311156ms)
Feb  8 13:21:14.807: INFO: (17) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 13.4754ms)
Feb  8 13:21:14.808: INFO: (17) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 13.792407ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:1080/proxy/: test<... (200; 7.785426ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:462/proxy/: tls qux (200; 7.898908ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 8.16059ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 8.135515ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 8.33188ms)
Feb  8 13:21:14.816: INFO: (18) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 8.473791ms)
Feb  8 13:21:14.817: INFO: (18) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: [markup-bearing response body and adjacent log lines lost in extraction] ... (200; 9.11133ms)
Feb  8 13:21:14.817: INFO: (18) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 8.982066ms)
Feb  8 13:21:14.817: INFO: (18) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 9.160414ms)
Feb  8 13:21:14.819: INFO: (18) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 11.477959ms)
Feb  8 13:21:14.821: INFO: (18) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 13.434313ms)
Feb  8 13:21:14.822: INFO: (18) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 13.799519ms)
Feb  8 13:21:14.822: INFO: (18) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 13.91173ms)
Feb  8 13:21:14.822: INFO: (18) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 13.935814ms)
Feb  8 13:21:14.823: INFO: (18) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname2/proxy/: tls qux (200; 15.160106ms)
Feb  8 13:21:14.838: INFO: (19) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname2/proxy/: bar (200; 14.455595ms)
Feb  8 13:21:14.839: INFO: (19) /api/v1/namespaces/proxy-7567/services/http:proxy-service-gdjj7:portname1/proxy/: foo (200; 15.632264ms)
Feb  8 13:21:14.839: INFO: (19) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:1080/proxy/: ... (200; 15.820857ms)
Feb  8 13:21:14.839: INFO: (19) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 15.988264ms)
Feb  8 13:21:14.839: INFO: (19) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk/proxy/: test (200; 16.32236ms)
Feb  8 13:21:14.840: INFO: (19) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:460/proxy/: tls baz (200; 16.751501ms)
Feb  8 13:21:14.841: INFO: (19) /api/v1/namespaces/proxy-7567/services/https:proxy-service-gdjj7:tlsportname1/proxy/: tls baz (200; 18.43196ms)
Feb  8 13:21:14.842: INFO: (19) /api/v1/namespaces/proxy-7567/pods/https:proxy-service-gdjj7-57mnk:443/proxy/: [markup-bearing response body and adjacent log lines lost in extraction] test<... (200; 20.702837ms)
Feb  8 13:21:14.844: INFO: (19) /api/v1/namespaces/proxy-7567/pods/proxy-service-gdjj7-57mnk:162/proxy/: bar (200; 20.868376ms)
Feb  8 13:21:14.844: INFO: (19) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname2/proxy/: bar (200; 20.909171ms)
Feb  8 13:21:14.846: INFO: (19) /api/v1/namespaces/proxy-7567/pods/http:proxy-service-gdjj7-57mnk:160/proxy/: foo (200; 22.46904ms)
Feb  8 13:21:14.846: INFO: (19) /api/v1/namespaces/proxy-7567/services/proxy-service-gdjj7:portname1/proxy/: foo (200; 22.550052ms)
STEP: deleting ReplicationController proxy-service-gdjj7 in namespace proxy-7567, will wait for the garbage collector to delete the pods
Feb  8 13:21:14.947: INFO: Deleting ReplicationController proxy-service-gdjj7 took: 30.729234ms
Feb  8 13:21:15.248: INFO: Terminating ReplicationController proxy-service-gdjj7 pods took: 300.64873ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:21:26.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7567" for this suite.
Feb  8 13:21:32.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:21:32.778: INFO: namespace proxy-7567 deletion completed in 6.123686432s

• [SLOW TEST:27.551 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
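Note: the probe URLs in this test follow the apiserver proxy subresource pattern `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port>]/proxy/`. A hedged reconstruction of the multi-port Service being proxied to — port names and the foo/bar/tls baz/tls qux bodies come from the log; the service port numbers and selector are assumptions:

```yaml
# Hypothetical Service matching the log's port names. The observed
# response bodies suggest the name -> targetPort pairing shown;
# the `port` numbers and selector labels are assumed, not logged.
apiVersion: v1
kind: Service
metadata:
  name: proxy-service-gdjj7
  namespace: proxy-7567
spec:
  selector:
    app: proxy-service        # assumed selector
  ports:
  - name: portname1           # plain HTTP; pod port 160 answers "foo"
    port: 80
    targetPort: 160
  - name: portname2           # plain HTTP; pod port 162 answers "bar"
    port: 81
    targetPort: 162
  - name: tlsportname1        # TLS; pod port 460 answers "tls baz"
    port: 443
    targetPort: 460
  - name: tlsportname2        # TLS; pod port 462 answers "tls qux"
    port: 444
    targetPort: 462
```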
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:21:32.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-493ab013-32e6-4116-80fe-081136afc8ef
STEP: Creating a pod to test consume secrets
Feb  8 13:21:32.929: INFO: Waiting up to 5m0s for pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75" in namespace "secrets-1615" to be "success or failure"
Feb  8 13:21:32.950: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 20.395631ms
Feb  8 13:21:34.960: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03106972s
Feb  8 13:21:36.970: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040484585s
Feb  8 13:21:38.978: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048818405s
Feb  8 13:21:40.989: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05989026s
STEP: Saw pod success
Feb  8 13:21:40.989: INFO: Pod "pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75" satisfied condition "success or failure"
Feb  8 13:21:40.994: INFO: Trying to get logs from node iruya-node pod pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75 container secret-env-test: 
STEP: delete the pod
Feb  8 13:21:41.034: INFO: Waiting for pod pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75 to disappear
Feb  8 13:21:41.046: INFO: Pod pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:21:41.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1615" for this suite.
Feb  8 13:21:47.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:21:47.287: INFO: namespace secrets-1615 deletion completed in 6.193212534s

• [SLOW TEST:14.509 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
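Note: the fixture this test builds — a Secret injected into a pod through an environment variable — looks roughly like the following. Object and container names come from the log; the key, value, and image are assumptions:

```yaml
# Hypothetical sketch of the test fixture: secret consumed via env var.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-493ab013-32e6-4116-80fe-081136afc8ef
data:
  data-1: dmFsdWUtMQ==        # base64("value-1"); assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-05ef9c21-e343-4e5a-8a99-49927acbfc75
spec:
  restartPolicy: Never        # pod runs to completion ("success or failure")
  containers:
  - name: secret-env-test
    image: busybox:1.29       # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA       # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-493ab013-32e6-4116-80fe-081136afc8ef
          key: data-1
```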
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:21:47.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  8 13:21:47.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8002 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  8 13:21:56.629: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0208 13:21:55.412744    1594 log.go:172] (0xc000a36160) (0xc0003ea140) Create stream\nI0208 13:21:55.412912    1594 log.go:172] (0xc000a36160) (0xc0003ea140) Stream added, broadcasting: 1\nI0208 13:21:55.420824    1594 log.go:172] (0xc000a36160) Reply frame received for 1\nI0208 13:21:55.420860    1594 log.go:172] (0xc000a36160) (0xc0003ea1e0) Create stream\nI0208 13:21:55.420872    1594 log.go:172] (0xc000a36160) (0xc0003ea1e0) Stream added, broadcasting: 3\nI0208 13:21:55.422508    1594 log.go:172] (0xc000a36160) Reply frame received for 3\nI0208 13:21:55.422575    1594 log.go:172] (0xc000a36160) (0xc000426000) Create stream\nI0208 13:21:55.422590    1594 log.go:172] (0xc000a36160) (0xc000426000) Stream added, broadcasting: 5\nI0208 13:21:55.424657    1594 log.go:172] (0xc000a36160) Reply frame received for 5\nI0208 13:21:55.424702    1594 log.go:172] (0xc000a36160) (0xc0004e6000) Create stream\nI0208 13:21:55.424719    1594 log.go:172] (0xc000a36160) (0xc0004e6000) Stream added, broadcasting: 7\nI0208 13:21:55.426513    1594 log.go:172] (0xc000a36160) Reply frame received for 7\nI0208 13:21:55.426721    1594 log.go:172] (0xc0003ea1e0) (3) Writing data frame\nI0208 13:21:55.426896    1594 log.go:172] (0xc0003ea1e0) (3) Writing data frame\nI0208 13:21:55.436288    1594 log.go:172] (0xc000a36160) Data frame received for 5\nI0208 13:21:55.436518    1594 log.go:172] (0xc000426000) (5) Data frame handling\nI0208 13:21:55.436632    1594 log.go:172] (0xc000426000) (5) Data frame sent\nI0208 13:21:55.442496    1594 log.go:172] (0xc000a36160) Data frame received for 5\nI0208 13:21:55.442518    1594 log.go:172] (0xc000426000) (5) Data frame handling\nI0208 13:21:55.442535    1594 log.go:172] (0xc000426000) (5) Data frame 
sent\nI0208 13:21:56.581598    1594 log.go:172] (0xc000a36160) (0xc0003ea1e0) Stream removed, broadcasting: 3\nI0208 13:21:56.581708    1594 log.go:172] (0xc000a36160) Data frame received for 1\nI0208 13:21:56.581726    1594 log.go:172] (0xc0003ea140) (1) Data frame handling\nI0208 13:21:56.581750    1594 log.go:172] (0xc0003ea140) (1) Data frame sent\nI0208 13:21:56.581784    1594 log.go:172] (0xc000a36160) (0xc0003ea140) Stream removed, broadcasting: 1\nI0208 13:21:56.581856    1594 log.go:172] (0xc000a36160) (0xc000426000) Stream removed, broadcasting: 5\nI0208 13:21:56.581921    1594 log.go:172] (0xc000a36160) (0xc0004e6000) Stream removed, broadcasting: 7\nI0208 13:21:56.581961    1594 log.go:172] (0xc000a36160) Go away received\nI0208 13:21:56.582023    1594 log.go:172] (0xc000a36160) (0xc0003ea140) Stream removed, broadcasting: 1\nI0208 13:21:56.582046    1594 log.go:172] (0xc000a36160) (0xc0003ea1e0) Stream removed, broadcasting: 3\nI0208 13:21:56.582061    1594 log.go:172] (0xc000a36160) (0xc000426000) Stream removed, broadcasting: 5\nI0208 13:21:56.582077    1594 log.go:172] (0xc000a36160) (0xc0004e6000) Stream removed, broadcasting: 7\n"
Feb  8 13:21:56.629: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:21:58.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8002" for this suite.
Feb  8 13:22:04.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:22:04.811: INFO: namespace kubectl-8002 deletion completed in 6.16372468s

• [SLOW TEST:17.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
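Note: the deprecated `--generator=job/v1` invocation logged above creates and attaches to a Job of approximately this shape (reconstructed from the logged command line; only the flags shown there are grounded):

```yaml
# Approximate Job produced by the logged `kubectl run --rm` command.
# `stdin: true` is what lets the test attach and pipe "abcd1234" into
# `cat` before stdin closes.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

With `--rm=true`, kubectl deletes the Job after the attach session ends, which is what the "verifying the job e2e-test-rm-busybox-job was deleted" step checks.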
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:22:04.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-864781de-e634-4d9f-9f0e-b54049b0b5f7
STEP: Creating a pod to test consume configMaps
Feb  8 13:22:04.935: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9" in namespace "configmap-4660" to be "success or failure"
Feb  8 13:22:04.949: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.937035ms
Feb  8 13:22:07.649: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713728486s
Feb  8 13:22:09.678: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.74283375s
Feb  8 13:22:11.691: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.755925649s
Feb  8 13:22:13.699: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.763492098s
STEP: Saw pod success
Feb  8 13:22:13.699: INFO: Pod "pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9" satisfied condition "success or failure"
Feb  8 13:22:13.704: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9 container configmap-volume-test: 
STEP: delete the pod
Feb  8 13:22:13.931: INFO: Waiting for pod pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9 to disappear
Feb  8 13:22:13.955: INFO: Pod pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:22:13.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4660" for this suite.
Feb  8 13:22:21.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:22:22.089: INFO: namespace configmap-4660 deletion completed in 8.106927958s

• [SLOW TEST:17.278 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
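Note: this test's fixture is a ConfigMap mounted as a volume into a pod running as a non-root UID. A hedged sketch — object and container names come from the log; the key/value, UID, mount path, and image are assumptions:

```yaml
# Hypothetical reconstruction of the ConfigMap-volume-as-non-root fixture.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-864781de-e634-4d9f-9f0e-b54049b0b5f7
data:
  data-1: value-1             # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-4c9f5d40-c4ee-4bea-8e94-e3ad76dd1df9
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000           # non-root UID, per the test's [LinuxOnly] intent
  containers:
  - name: configmap-volume-test
    image: busybox:1.29       # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-864781de-e634-4d9f-9f0e-b54049b0b5f7
```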
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:22:22.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:22:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4119" for this suite.
Feb  8 13:22:28.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:22:28.542: INFO: namespace services-4119 deletion completed in 6.316086467s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.453 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
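Note: the [It] body logs nothing because the check is quick: it looks up the built-in `kubernetes` Service in the `default` namespace and verifies it exposes the API server over HTTPS on port 443. That Service typically has this shape (clusterIP and targetPort are cluster-dependent example values):

```yaml
# Typical shape of the built-in `kubernetes` Service this test inspects.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  clusterIP: 10.96.0.1        # example value; varies per cluster
  ports:
  - name: https
    port: 443
    targetPort: 6443          # apiserver secure port; varies per cluster
```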
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:22:28.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:22:28.704: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  8 13:22:28.737: INFO: Number of nodes with available pods: 0
Feb  8 13:22:28.737: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  8 13:22:28.814: INFO: Number of nodes with available pods: 0
Feb  8 13:22:28.814: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:29.834: INFO: Number of nodes with available pods: 0
Feb  8 13:22:29.834: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:30.829: INFO: Number of nodes with available pods: 0
Feb  8 13:22:30.829: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:31.823: INFO: Number of nodes with available pods: 0
Feb  8 13:22:31.823: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:32.822: INFO: Number of nodes with available pods: 0
Feb  8 13:22:32.822: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:33.880: INFO: Number of nodes with available pods: 0
Feb  8 13:22:33.880: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:34.827: INFO: Number of nodes with available pods: 0
Feb  8 13:22:34.827: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:35.829: INFO: Number of nodes with available pods: 1
Feb  8 13:22:35.829: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  8 13:22:35.923: INFO: Number of nodes with available pods: 1
Feb  8 13:22:35.923: INFO: Number of running nodes: 0, number of available pods: 1
Feb  8 13:22:36.929: INFO: Number of nodes with available pods: 0
Feb  8 13:22:36.929: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  8 13:22:36.944: INFO: Number of nodes with available pods: 0
Feb  8 13:22:36.944: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:37.952: INFO: Number of nodes with available pods: 0
Feb  8 13:22:37.952: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:38.951: INFO: Number of nodes with available pods: 0
Feb  8 13:22:38.951: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:39.950: INFO: Number of nodes with available pods: 0
Feb  8 13:22:39.950: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:40.951: INFO: Number of nodes with available pods: 0
Feb  8 13:22:40.951: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:41.956: INFO: Number of nodes with available pods: 0
Feb  8 13:22:41.956: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:42.952: INFO: Number of nodes with available pods: 0
Feb  8 13:22:42.952: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:43.955: INFO: Number of nodes with available pods: 0
Feb  8 13:22:43.955: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:44.954: INFO: Number of nodes with available pods: 0
Feb  8 13:22:44.954: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:45.952: INFO: Number of nodes with available pods: 0
Feb  8 13:22:45.952: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:46.955: INFO: Number of nodes with available pods: 0
Feb  8 13:22:46.955: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:47.951: INFO: Number of nodes with available pods: 0
Feb  8 13:22:47.951: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:48.953: INFO: Number of nodes with available pods: 0
Feb  8 13:22:48.953: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:49.952: INFO: Number of nodes with available pods: 0
Feb  8 13:22:49.952: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:50.959: INFO: Number of nodes with available pods: 0
Feb  8 13:22:50.959: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:51.954: INFO: Number of nodes with available pods: 0
Feb  8 13:22:51.954: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:52.958: INFO: Number of nodes with available pods: 0
Feb  8 13:22:52.958: INFO: Node iruya-node is running more than one daemon pod
Feb  8 13:22:53.960: INFO: Number of nodes with available pods: 1
Feb  8 13:22:53.960: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3489, will wait for the garbage collector to delete the pods
Feb  8 13:22:54.038: INFO: Deleting DaemonSet.extensions daemon-set took: 16.390354ms
Feb  8 13:22:54.338: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.21486ms
Feb  8 13:23:06.652: INFO: Number of nodes with available pods: 0
Feb  8 13:23:06.652: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 13:23:06.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3489/daemonsets","resourceVersion":"23569859"},"items":null}

Feb  8 13:23:06.668: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3489/pods","resourceVersion":"23569859"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:23:06.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3489" for this suite.
Feb  8 13:23:12.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:23:12.935: INFO: namespace daemonsets-3489 deletion completed in 6.177563545s

• [SLOW TEST:44.393 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:23:12.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-121
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 13:23:13.050: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 13:23:51.328: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-121 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:23:51.328: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:23:51.429763       8 log.go:172] (0xc00154eb00) (0xc000113540) Create stream
I0208 13:23:51.429852       8 log.go:172] (0xc00154eb00) (0xc000113540) Stream added, broadcasting: 1
I0208 13:23:51.440418       8 log.go:172] (0xc00154eb00) Reply frame received for 1
I0208 13:23:51.440469       8 log.go:172] (0xc00154eb00) (0xc001ef8dc0) Create stream
I0208 13:23:51.440488       8 log.go:172] (0xc00154eb00) (0xc001ef8dc0) Stream added, broadcasting: 3
I0208 13:23:51.442171       8 log.go:172] (0xc00154eb00) Reply frame received for 3
I0208 13:23:51.442203       8 log.go:172] (0xc00154eb00) (0xc0001135e0) Create stream
I0208 13:23:51.442210       8 log.go:172] (0xc00154eb00) (0xc0001135e0) Stream added, broadcasting: 5
I0208 13:23:51.443936       8 log.go:172] (0xc00154eb00) Reply frame received for 5
I0208 13:23:52.609189       8 log.go:172] (0xc00154eb00) Data frame received for 3
I0208 13:23:52.609453       8 log.go:172] (0xc001ef8dc0) (3) Data frame handling
I0208 13:23:52.609498       8 log.go:172] (0xc001ef8dc0) (3) Data frame sent
I0208 13:23:52.728631       8 log.go:172] (0xc00154eb00) Data frame received for 1
I0208 13:23:52.728699       8 log.go:172] (0xc000113540) (1) Data frame handling
I0208 13:23:52.728727       8 log.go:172] (0xc00154eb00) (0xc001ef8dc0) Stream removed, broadcasting: 3
I0208 13:23:52.728766       8 log.go:172] (0xc000113540) (1) Data frame sent
I0208 13:23:52.728787       8 log.go:172] (0xc00154eb00) (0xc000113540) Stream removed, broadcasting: 1
I0208 13:23:52.729406       8 log.go:172] (0xc00154eb00) (0xc0001135e0) Stream removed, broadcasting: 5
I0208 13:23:52.729493       8 log.go:172] (0xc00154eb00) Go away received
I0208 13:23:52.729546       8 log.go:172] (0xc00154eb00) (0xc000113540) Stream removed, broadcasting: 1
I0208 13:23:52.729589       8 log.go:172] (0xc00154eb00) (0xc001ef8dc0) Stream removed, broadcasting: 3
I0208 13:23:52.729635       8 log.go:172] (0xc00154eb00) (0xc0001135e0) Stream removed, broadcasting: 5
Feb  8 13:23:52.729: INFO: Found all expected endpoints: [netserver-0]
Feb  8 13:23:52.737: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-121 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:23:52.737: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:23:52.796421       8 log.go:172] (0xc0018988f0) (0xc0005e1180) Create stream
I0208 13:23:52.796481       8 log.go:172] (0xc0018988f0) (0xc0005e1180) Stream added, broadcasting: 1
I0208 13:23:52.800624       8 log.go:172] (0xc0018988f0) Reply frame received for 1
I0208 13:23:52.800654       8 log.go:172] (0xc0018988f0) (0xc002facc80) Create stream
I0208 13:23:52.800659       8 log.go:172] (0xc0018988f0) (0xc002facc80) Stream added, broadcasting: 3
I0208 13:23:52.801795       8 log.go:172] (0xc0018988f0) Reply frame received for 3
I0208 13:23:52.801821       8 log.go:172] (0xc0018988f0) (0xc002facd20) Create stream
I0208 13:23:52.801842       8 log.go:172] (0xc0018988f0) (0xc002facd20) Stream added, broadcasting: 5
I0208 13:23:52.803371       8 log.go:172] (0xc0018988f0) Reply frame received for 5
I0208 13:23:53.926065       8 log.go:172] (0xc0018988f0) Data frame received for 3
I0208 13:23:53.926147       8 log.go:172] (0xc002facc80) (3) Data frame handling
I0208 13:23:53.926163       8 log.go:172] (0xc002facc80) (3) Data frame sent
I0208 13:23:54.130474       8 log.go:172] (0xc0018988f0) Data frame received for 1
I0208 13:23:54.130596       8 log.go:172] (0xc0005e1180) (1) Data frame handling
I0208 13:23:54.130773       8 log.go:172] (0xc0005e1180) (1) Data frame sent
I0208 13:23:54.130885       8 log.go:172] (0xc0018988f0) (0xc0005e1180) Stream removed, broadcasting: 1
I0208 13:23:54.131673       8 log.go:172] (0xc0018988f0) (0xc002facc80) Stream removed, broadcasting: 3
I0208 13:23:54.131902       8 log.go:172] (0xc0018988f0) (0xc002facd20) Stream removed, broadcasting: 5
I0208 13:23:54.131946       8 log.go:172] (0xc0018988f0) (0xc0005e1180) Stream removed, broadcasting: 1
I0208 13:23:54.131966       8 log.go:172] (0xc0018988f0) (0xc002facc80) Stream removed, broadcasting: 3
I0208 13:23:54.131980       8 log.go:172] (0xc0018988f0) (0xc002facd20) Stream removed, broadcasting: 5
Feb  8 13:23:54.132: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:23:54.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-121" for this suite.
Feb  8 13:24:18.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:24:18.351: INFO: namespace pod-network-test-121 deletion completed in 24.205908288s

• [SLOW TEST:65.414 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:24:18.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  8 13:24:18.516: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-250,SelfLink:/api/v1/namespaces/watch-250/configmaps/e2e-watch-test-resource-version,UID:d35ca5c0-53df-4841-8804-b14c0369bfc2,ResourceVersion:23570046,Generation:0,CreationTimestamp:2020-02-08 13:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 13:24:18.516: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-250,SelfLink:/api/v1/namespaces/watch-250/configmaps/e2e-watch-test-resource-version,UID:d35ca5c0-53df-4841-8804-b14c0369bfc2,ResourceVersion:23570047,Generation:0,CreationTimestamp:2020-02-08 13:24:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:24:18.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-250" for this suite.
Feb  8 13:24:26.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:24:26.659: INFO: namespace watch-250 deletion completed in 8.136142659s

• [SLOW TEST:8.307 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:24:26.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-36f4bc1b-eb7d-408a-9bd4-351f34d003ce
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-36f4bc1b-eb7d-408a-9bd4-351f34d003ce
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:25:52.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4116" for this suite.
Feb  8 13:26:14.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:26:15.077: INFO: namespace configmap-4116 deletion completed in 22.196112903s

• [SLOW TEST:108.418 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:26:15.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-hd4l
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 13:26:15.159: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hd4l" in namespace "subpath-924" to be "success or failure"
Feb  8 13:26:15.166: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Pending", Reason="", readiness=false. Elapsed: 7.866043ms
Feb  8 13:26:17.174: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015631421s
Feb  8 13:26:19.180: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021232388s
Feb  8 13:26:21.187: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028053419s
Feb  8 13:26:23.194: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 8.0357929s
Feb  8 13:26:25.205: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 10.046264411s
Feb  8 13:26:27.214: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 12.055710883s
Feb  8 13:26:29.238: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 14.078999083s
Feb  8 13:26:31.253: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 16.094006853s
Feb  8 13:26:33.265: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 18.106871433s
Feb  8 13:26:35.286: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 20.127486659s
Feb  8 13:26:37.295: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 22.136665705s
Feb  8 13:26:39.307: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 24.148668505s
Feb  8 13:26:41.317: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 26.158496776s
Feb  8 13:26:43.326: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Running", Reason="", readiness=true. Elapsed: 28.16779231s
Feb  8 13:26:45.339: INFO: Pod "pod-subpath-test-configmap-hd4l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.179929153s
STEP: Saw pod success
Feb  8 13:26:45.339: INFO: Pod "pod-subpath-test-configmap-hd4l" satisfied condition "success or failure"
Feb  8 13:26:45.374: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-hd4l container test-container-subpath-configmap-hd4l: 
STEP: delete the pod
Feb  8 13:26:45.467: INFO: Waiting for pod pod-subpath-test-configmap-hd4l to disappear
Feb  8 13:26:45.501: INFO: Pod pod-subpath-test-configmap-hd4l no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hd4l
Feb  8 13:26:45.501: INFO: Deleting pod "pod-subpath-test-configmap-hd4l" in namespace "subpath-924"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:26:45.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-924" for this suite.
Feb  8 13:26:51.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:26:51.710: INFO: namespace subpath-924 deletion completed in 6.185803586s

• [SLOW TEST:36.633 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:26:51.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0208 13:26:54.654732       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 13:26:54.654: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:26:54.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3175" for this suite.
Feb  8 13:27:00.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:27:00.842: INFO: namespace gc-3175 deletion completed in 6.185090126s

• [SLOW TEST:9.131 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:27:00.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f38c0bbb-8567-4665-af75-838699d0cd2f
STEP: Creating a pod to test consume configMaps
Feb  8 13:27:01.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f" in namespace "configmap-4551" to be "success or failure"
Feb  8 13:27:01.081: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f": Phase="Pending", Reason="", readiness=false. Elapsed: 26.328058ms
Feb  8 13:27:03.087: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032301506s
Feb  8 13:27:05.098: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043539479s
Feb  8 13:27:07.105: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050899388s
Feb  8 13:27:09.114: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05961112s
STEP: Saw pod success
Feb  8 13:27:09.114: INFO: Pod "pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f" satisfied condition "success or failure"
Feb  8 13:27:09.119: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f container configmap-volume-test: 
STEP: delete the pod
Feb  8 13:27:09.197: INFO: Waiting for pod pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f to disappear
Feb  8 13:27:09.203: INFO: Pod pod-configmaps-0916e67d-1f3d-42b1-b455-6e9edc20314f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:27:09.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4551" for this suite.
Feb  8 13:27:15.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:27:15.483: INFO: namespace configmap-4551 deletion completed in 6.271613294s

• [SLOW TEST:14.642 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:27:15.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  8 13:27:24.733: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:27:24.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1910" for this suite.
Feb  8 13:27:49.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:27:49.175: INFO: namespace replicaset-1910 deletion completed in 24.337111232s

• [SLOW TEST:33.692 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:27:49.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:27:49.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9061" for this suite.
Feb  8 13:28:11.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:28:11.494: INFO: namespace pods-9061 deletion completed in 22.179768588s

• [SLOW TEST:22.318 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:28:11.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  8 13:28:11.587: INFO: Waiting up to 5m0s for pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912" in namespace "emptydir-577" to be "success or failure"
Feb  8 13:28:11.602: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912": Phase="Pending", Reason="", readiness=false. Elapsed: 15.084106ms
Feb  8 13:28:13.613: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025692445s
Feb  8 13:28:15.619: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03216502s
Feb  8 13:28:17.627: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039688344s
Feb  8 13:28:19.679: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092206355s
STEP: Saw pod success
Feb  8 13:28:19.680: INFO: Pod "pod-4aeaa3be-252e-4ed5-af77-702c27efc912" satisfied condition "success or failure"
Feb  8 13:28:19.683: INFO: Trying to get logs from node iruya-node pod pod-4aeaa3be-252e-4ed5-af77-702c27efc912 container test-container: 
STEP: delete the pod
Feb  8 13:28:19.855: INFO: Waiting for pod pod-4aeaa3be-252e-4ed5-af77-702c27efc912 to disappear
Feb  8 13:28:19.865: INFO: Pod pod-4aeaa3be-252e-4ed5-af77-702c27efc912 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:28:19.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-577" for this suite.
Feb  8 13:28:25.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:28:26.021: INFO: namespace emptydir-577 deletion completed in 6.143753702s

• [SLOW TEST:14.527 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:28:26.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5433
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 13:28:26.086: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 13:29:02.289: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5433 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:29:02.289: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:29:02.381213       8 log.go:172] (0xc00098c370) (0xc001ebc000) Create stream
I0208 13:29:02.381272       8 log.go:172] (0xc00098c370) (0xc001ebc000) Stream added, broadcasting: 1
I0208 13:29:02.391354       8 log.go:172] (0xc00098c370) Reply frame received for 1
I0208 13:29:02.391400       8 log.go:172] (0xc00098c370) (0xc002e4c000) Create stream
I0208 13:29:02.391420       8 log.go:172] (0xc00098c370) (0xc002e4c000) Stream added, broadcasting: 3
I0208 13:29:02.398681       8 log.go:172] (0xc00098c370) Reply frame received for 3
I0208 13:29:02.398708       8 log.go:172] (0xc00098c370) (0xc001ef86e0) Create stream
I0208 13:29:02.398718       8 log.go:172] (0xc00098c370) (0xc001ef86e0) Stream added, broadcasting: 5
I0208 13:29:02.399932       8 log.go:172] (0xc00098c370) Reply frame received for 5
I0208 13:29:02.640476       8 log.go:172] (0xc00098c370) Data frame received for 3
I0208 13:29:02.640649       8 log.go:172] (0xc002e4c000) (3) Data frame handling
I0208 13:29:02.640700       8 log.go:172] (0xc002e4c000) (3) Data frame sent
I0208 13:29:02.794814       8 log.go:172] (0xc00098c370) Data frame received for 1
I0208 13:29:02.794926       8 log.go:172] (0xc001ebc000) (1) Data frame handling
I0208 13:29:02.794976       8 log.go:172] (0xc001ebc000) (1) Data frame sent
I0208 13:29:02.795276       8 log.go:172] (0xc00098c370) (0xc001ebc000) Stream removed, broadcasting: 1
I0208 13:29:02.795867       8 log.go:172] (0xc00098c370) (0xc002e4c000) Stream removed, broadcasting: 3
I0208 13:29:02.796005       8 log.go:172] (0xc00098c370) (0xc001ef86e0) Stream removed, broadcasting: 5
I0208 13:29:02.796052       8 log.go:172] (0xc00098c370) (0xc001ebc000) Stream removed, broadcasting: 1
I0208 13:29:02.796071       8 log.go:172] (0xc00098c370) (0xc002e4c000) Stream removed, broadcasting: 3
I0208 13:29:02.796086       8 log.go:172] (0xc00098c370) (0xc001ef86e0) Stream removed, broadcasting: 5
I0208 13:29:02.796660       8 log.go:172] (0xc00098c370) Go away received
Feb  8 13:29:02.796: INFO: Waiting for endpoints: map[]
Feb  8 13:29:02.805: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5433 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 13:29:02.805: INFO: >>> kubeConfig: /root/.kube/config
I0208 13:29:02.885425       8 log.go:172] (0xc00154e9a0) (0xc002e4c280) Create stream
I0208 13:29:02.885501       8 log.go:172] (0xc00154e9a0) (0xc002e4c280) Stream added, broadcasting: 1
I0208 13:29:02.892305       8 log.go:172] (0xc00154e9a0) Reply frame received for 1
I0208 13:29:02.892333       8 log.go:172] (0xc00154e9a0) (0xc001ebc0a0) Create stream
I0208 13:29:02.892347       8 log.go:172] (0xc00154e9a0) (0xc001ebc0a0) Stream added, broadcasting: 3
I0208 13:29:02.894061       8 log.go:172] (0xc00154e9a0) Reply frame received for 3
I0208 13:29:02.894086       8 log.go:172] (0xc00154e9a0) (0xc001ef8a00) Create stream
I0208 13:29:02.894092       8 log.go:172] (0xc00154e9a0) (0xc001ef8a00) Stream added, broadcasting: 5
I0208 13:29:02.899286       8 log.go:172] (0xc00154e9a0) Reply frame received for 5
I0208 13:29:03.027769       8 log.go:172] (0xc00154e9a0) Data frame received for 3
I0208 13:29:03.027867       8 log.go:172] (0xc001ebc0a0) (3) Data frame handling
I0208 13:29:03.027903       8 log.go:172] (0xc001ebc0a0) (3) Data frame sent
I0208 13:29:03.147990       8 log.go:172] (0xc00154e9a0) (0xc001ebc0a0) Stream removed, broadcasting: 3
I0208 13:29:03.148123       8 log.go:172] (0xc00154e9a0) (0xc001ef8a00) Stream removed, broadcasting: 5
I0208 13:29:03.148151       8 log.go:172] (0xc00154e9a0) Data frame received for 1
I0208 13:29:03.148171       8 log.go:172] (0xc002e4c280) (1) Data frame handling
I0208 13:29:03.148190       8 log.go:172] (0xc002e4c280) (1) Data frame sent
I0208 13:29:03.148211       8 log.go:172] (0xc00154e9a0) (0xc002e4c280) Stream removed, broadcasting: 1
I0208 13:29:03.148233       8 log.go:172] (0xc00154e9a0) Go away received
I0208 13:29:03.148505       8 log.go:172] (0xc00154e9a0) (0xc002e4c280) Stream removed, broadcasting: 1
I0208 13:29:03.148675       8 log.go:172] (0xc00154e9a0) (0xc001ebc0a0) Stream removed, broadcasting: 3
I0208 13:29:03.148696       8 log.go:172] (0xc00154e9a0) (0xc001ef8a00) Stream removed, broadcasting: 5
Feb  8 13:29:03.148: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:29:03.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5433" for this suite.
Feb  8 13:29:27.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:29:27.358: INFO: namespace pod-network-test-5433 deletion completed in 24.200739975s

• [SLOW TEST:61.337 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:29:27.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-a91049c2-cb39-4efc-8eee-6970f58cc67c
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:29:27.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6539" for this suite.
Feb  8 13:29:33.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:29:33.676: INFO: namespace secrets-6539 deletion completed in 6.187557213s

• [SLOW TEST:6.318 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:29:33.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2515d690-f1e5-4704-9dd6-58caed14298c
STEP: Creating a pod to test consume secrets
Feb  8 13:29:33.859: INFO: Waiting up to 5m0s for pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb" in namespace "secrets-5221" to be "success or failure"
Feb  8 13:29:33.870: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.399079ms
Feb  8 13:29:35.904: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045058865s
Feb  8 13:29:37.912: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05265186s
Feb  8 13:29:39.932: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072829062s
Feb  8 13:29:41.948: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088987685s
STEP: Saw pod success
Feb  8 13:29:41.948: INFO: Pod "pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb" satisfied condition "success or failure"
Feb  8 13:29:41.953: INFO: Trying to get logs from node iruya-node pod pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb container secret-volume-test: 
STEP: delete the pod
Feb  8 13:29:42.286: INFO: Waiting for pod pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb to disappear
Feb  8 13:29:42.298: INFO: Pod pod-secrets-8fd911b4-e2b2-4b7e-b512-02089e9b35fb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:29:42.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5221" for this suite.
Feb  8 13:29:48.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:29:48.501: INFO: namespace secrets-5221 deletion completed in 6.194356561s

• [SLOW TEST:14.825 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:29:48.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0208 13:30:32.003759       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 13:30:32.003: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:30:32.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1762" for this suite.
Feb  8 13:30:42.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:30:42.375: INFO: namespace gc-1762 deletion completed in 10.363572635s

• [SLOW TEST:53.873 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:30:42.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  8 13:30:57.966: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:30:58.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6504" for this suite.
Feb  8 13:31:04.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:31:04.292: INFO: namespace container-runtime-6504 deletion completed in 6.181596429s

• [SLOW TEST:21.916 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:31:04.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  8 13:31:12.551: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:31:12.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4135" for this suite.
Feb  8 13:31:18.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:31:18.863: INFO: namespace container-runtime-4135 deletion completed in 6.25242046s

• [SLOW TEST:14.570 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:31:18.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0208 13:31:29.026896       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 13:31:29.026: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:31:29.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6371" for this suite.
Feb  8 13:31:35.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:31:35.184: INFO: namespace gc-6371 deletion completed in 6.154779541s

• [SLOW TEST:16.321 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:31:35.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 13:31:35.258: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550" in namespace "projected-3696" to be "success or failure"
Feb  8 13:31:35.266: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550": Phase="Pending", Reason="", readiness=false. Elapsed: 8.437516ms
Feb  8 13:31:37.279: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021520402s
Feb  8 13:31:39.288: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030389894s
Feb  8 13:31:41.305: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046631125s
Feb  8 13:31:43.317: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058607163s
STEP: Saw pod success
Feb  8 13:31:43.317: INFO: Pod "downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550" satisfied condition "success or failure"
Feb  8 13:31:43.322: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550 container client-container: 
STEP: delete the pod
Feb  8 13:31:43.418: INFO: Waiting for pod downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550 to disappear
Feb  8 13:31:43.444: INFO: Pod downwardapi-volume-84c21b48-5b9c-4ce7-b904-a36530b68550 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:31:43.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3696" for this suite.
Feb  8 13:31:49.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:31:49.746: INFO: namespace projected-3696 deletion completed in 6.216313656s

• [SLOW TEST:14.561 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:31:49.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-bcd24c80-37a2-4409-baa8-673bab7867e2
STEP: Creating a pod to test consume secrets
Feb  8 13:31:49.848: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea" in namespace "projected-2879" to be "success or failure"
Feb  8 13:31:49.886: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea": Phase="Pending", Reason="", readiness=false. Elapsed: 38.028704ms
Feb  8 13:31:51.897: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048486767s
Feb  8 13:31:53.909: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060843622s
Feb  8 13:31:55.918: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06912925s
Feb  8 13:31:57.924: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075647411s
STEP: Saw pod success
Feb  8 13:31:57.924: INFO: Pod "pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea" satisfied condition "success or failure"
Feb  8 13:31:57.927: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 13:31:58.927: INFO: Waiting for pod pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea to disappear
Feb  8 13:31:58.932: INFO: Pod pod-projected-secrets-3915d732-27a1-4e96-bc94-a7a7e6073dea no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:31:58.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2879" for this suite.
Feb  8 13:32:04.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:32:05.108: INFO: namespace projected-2879 deletion completed in 6.167952289s

• [SLOW TEST:15.362 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:32:05.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  8 13:32:05.225: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:32:17.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3442" for this suite.
Feb  8 13:32:23.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:32:23.826: INFO: namespace init-container-3442 deletion completed in 6.159403909s

• [SLOW TEST:18.717 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:32:23.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  8 13:32:34.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-9ba09c84-7752-4465-bbe6-e599f565dd52 -c busybox-main-container --namespace=emptydir-8297 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  8 13:32:36.390: INFO: stderr: "I0208 13:32:36.119364    1615 log.go:172] (0xc0005444d0) (0xc0007ef2c0) Create stream\nI0208 13:32:36.119420    1615 log.go:172] (0xc0005444d0) (0xc0007ef2c0) Stream added, broadcasting: 1\nI0208 13:32:36.131686    1615 log.go:172] (0xc0005444d0) Reply frame received for 1\nI0208 13:32:36.131772    1615 log.go:172] (0xc0005444d0) (0xc00053d680) Create stream\nI0208 13:32:36.131793    1615 log.go:172] (0xc0005444d0) (0xc00053d680) Stream added, broadcasting: 3\nI0208 13:32:36.134116    1615 log.go:172] (0xc0005444d0) Reply frame received for 3\nI0208 13:32:36.134142    1615 log.go:172] (0xc0005444d0) (0xc0008000a0) Create stream\nI0208 13:32:36.134151    1615 log.go:172] (0xc0005444d0) (0xc0008000a0) Stream added, broadcasting: 5\nI0208 13:32:36.135803    1615 log.go:172] (0xc0005444d0) Reply frame received for 5\nI0208 13:32:36.250419    1615 log.go:172] (0xc0005444d0) Data frame received for 3\nI0208 13:32:36.250468    1615 log.go:172] (0xc00053d680) (3) Data frame handling\nI0208 13:32:36.250491    1615 log.go:172] (0xc00053d680) (3) Data frame sent\nI0208 13:32:36.379704    1615 log.go:172] (0xc0005444d0) (0xc00053d680) Stream removed, broadcasting: 3\nI0208 13:32:36.379899    1615 log.go:172] (0xc0005444d0) Data frame received for 1\nI0208 13:32:36.379991    1615 log.go:172] (0xc0005444d0) (0xc0008000a0) Stream removed, broadcasting: 5\nI0208 13:32:36.380076    1615 log.go:172] (0xc0007ef2c0) (1) Data frame handling\nI0208 13:32:36.380134    1615 log.go:172] (0xc0007ef2c0) (1) Data frame sent\nI0208 13:32:36.380153    1615 log.go:172] (0xc0005444d0) (0xc0007ef2c0) Stream removed, broadcasting: 1\nI0208 13:32:36.380166    1615 log.go:172] (0xc0005444d0) Go away received\nI0208 13:32:36.380805    1615 log.go:172] (0xc0005444d0) (0xc0007ef2c0) Stream removed, broadcasting: 1\nI0208 13:32:36.380902    1615 log.go:172] (0xc0005444d0) (0xc00053d680) Stream removed, broadcasting: 3\nI0208 13:32:36.380914    1615 log.go:172] (0xc0005444d0) (0xc0008000a0) Stream removed, broadcasting: 5\n"
Feb  8 13:32:36.390: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:32:36.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8297" for this suite.
Feb  8 13:32:42.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:32:42.546: INFO: namespace emptydir-8297 deletion completed in 6.143656387s

• [SLOW TEST:18.720 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
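The EmptyDir test above creates a two-container pod sharing a volume: a sub-container writes `shareddata.txt`, and the main container reads it back via `kubectl exec`. A sketch of that pod shape (names, paths, and the message mirror what the log shows; the exact generated spec may differ):

```yaml
# Hypothetical two-container pod sharing an emptyDir volume, as exercised above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                              # ephemeral volume shared by both containers
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare       # reader sees the file here
  - name: busybox-sub-container
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello from the busy-box sub-container' > /pod-data/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data                    # writer's view of the same volume
```

The test then reads the file the same way the log records it: `kubectl exec pod-sharedvolume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt`.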
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:32:42.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:32:42.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5535" for this suite.
Feb  8 13:32:49.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:32:49.181: INFO: namespace kubelet-test-5535 deletion completed in 6.298695693s

• [SLOW TEST:6.633 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:32:49.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  8 13:32:49.285: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:33:02.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-395" for this suite.
Feb  8 13:33:08.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:33:08.550: INFO: namespace init-container-395 deletion completed in 6.227687412s

• [SLOW TEST:19.369 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:33:08.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-5118
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5118
STEP: Deleting pre-stop pod
Feb  8 13:33:31.935: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:33:31.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5118" for this suite.
Feb  8 13:34:12.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:34:12.297: INFO: namespace prestop-5118 deletion completed in 40.273034611s

• [SLOW TEST:63.747 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
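The PreStop test above verifies that deleting a pod triggers its `preStop` lifecycle hook before termination (the tester pod records the `"prestop": 1` notification seen in the log). A minimal, illustrative shape of a pod with such a hook (names and the hook command are assumptions):

```yaml
# Hypothetical pod with a preStop hook: on deletion, the kubelet runs the hook
# to completion before sending SIGTERM to the container.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: server
    image: nginx
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop > /tmp/prestop && sleep 5"]
```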
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:34:12.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-wkcs
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 13:34:12.391: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-wkcs" in namespace "subpath-6842" to be "success or failure"
Feb  8 13:34:12.402: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.75703ms
Feb  8 13:34:14.650: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258956723s
Feb  8 13:34:16.663: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271481491s
Feb  8 13:34:18.679: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287148669s
Feb  8 13:34:20.690: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 8.298232359s
Feb  8 13:34:22.696: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 10.304592503s
Feb  8 13:34:24.704: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 12.312304739s
Feb  8 13:34:26.712: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 14.320277284s
Feb  8 13:34:28.723: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 16.331041332s
Feb  8 13:34:30.735: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 18.343028837s
Feb  8 13:34:32.766: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 20.374143825s
Feb  8 13:34:34.779: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 22.387332881s
Feb  8 13:34:36.786: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 24.394319016s
Feb  8 13:34:38.792: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Running", Reason="", readiness=true. Elapsed: 26.400892519s
Feb  8 13:34:40.822: INFO: Pod "pod-subpath-test-secret-wkcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.430244323s
STEP: Saw pod success
Feb  8 13:34:40.822: INFO: Pod "pod-subpath-test-secret-wkcs" satisfied condition "success or failure"
Feb  8 13:34:40.827: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-wkcs container test-container-subpath-secret-wkcs: 
STEP: delete the pod
Feb  8 13:34:40.964: INFO: Waiting for pod pod-subpath-test-secret-wkcs to disappear
Feb  8 13:34:40.970: INFO: Pod pod-subpath-test-secret-wkcs no longer exists
STEP: Deleting pod pod-subpath-test-secret-wkcs
Feb  8 13:34:40.970: INFO: Deleting pod "pod-subpath-test-secret-wkcs" in namespace "subpath-6842"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:34:40.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6842" for this suite.
Feb  8 13:34:47.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:34:47.110: INFO: namespace subpath-6842 deletion completed in 6.130354403s

• [SLOW TEST:34.812 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
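The atomic-writer subpath test above mounts a single key of a Secret into a container via `subPath` and waits for the pod to succeed. A sketch of that pattern (the secret name, key, and paths are illustrative assumptions):

```yaml
# Hypothetical pod mounting one Secret key as a file via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-vol
    secret:
      secretName: my-secret          # assumed to exist with a key named "data-1"
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/probe-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /probe-volume/data-1  # with subPath, mountPath is the file itself
      subPath: data-1
```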
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:34:47.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-3987
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3987 to expose endpoints map[]
Feb  8 13:34:47.269: INFO: Get endpoints failed (24.449549ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  8 13:34:48.274: INFO: successfully validated that service multi-endpoint-test in namespace services-3987 exposes endpoints map[] (1.029786095s elapsed)
STEP: Creating pod pod1 in namespace services-3987
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3987 to expose endpoints map[pod1:[100]]
Feb  8 13:34:52.453: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.158921689s elapsed, will retry)
Feb  8 13:34:56.701: INFO: successfully validated that service multi-endpoint-test in namespace services-3987 exposes endpoints map[pod1:[100]] (8.407915476s elapsed)
STEP: Creating pod pod2 in namespace services-3987
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3987 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  8 13:35:01.535: INFO: Unexpected endpoints: found map[1d103cd1-b5bf-4004-b9c6-3e3a58baba0c:[100]], expected map[pod1:[100] pod2:[101]] (4.814015102s elapsed, will retry)
Feb  8 13:35:04.724: INFO: successfully validated that service multi-endpoint-test in namespace services-3987 exposes endpoints map[pod1:[100] pod2:[101]] (8.003248491s elapsed)
STEP: Deleting pod pod1 in namespace services-3987
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3987 to expose endpoints map[pod2:[101]]
Feb  8 13:35:05.783: INFO: successfully validated that service multi-endpoint-test in namespace services-3987 exposes endpoints map[pod2:[101]] (1.040618622s elapsed)
STEP: Deleting pod pod2 in namespace services-3987
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3987 to expose endpoints map[]
Feb  8 13:35:06.861: INFO: successfully validated that service multi-endpoint-test in namespace services-3987 exposes endpoints map[] (1.060222937s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:35:07.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3987" for this suite.
Feb  8 13:35:29.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:35:29.621: INFO: namespace services-3987 deletion completed in 22.159907522s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:42.511 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
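The Services test above creates a service with two named ports and checks that the endpoints map tracks pods as they come and go (`map[pod1:[100] pod2:[101]]` in the log). A multiport Service of that shape (selector and port names are illustrative; the target ports 100 and 101 match the endpoints the log reports):

```yaml
# Hypothetical multiport Service: each named port maps to a distinct target port,
# and the endpoints controller publishes one endpoint per matching pod per port.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-demo
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101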
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:35:29.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  8 13:35:29.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8771'
Feb  8 13:35:30.105: INFO: stderr: ""
Feb  8 13:35:30.105: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  8 13:35:30.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Feb  8 13:35:30.266: INFO: stderr: ""
Feb  8 13:35:30.266: INFO: stdout: "update-demo-nautilus-g7g79 update-demo-nautilus-r9d6b "
Feb  8 13:35:30.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7g79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:30.429: INFO: stderr: ""
Feb  8 13:35:30.429: INFO: stdout: ""
Feb  8 13:35:30.429: INFO: update-demo-nautilus-g7g79 is created but not running
Feb  8 13:35:35.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Feb  8 13:35:37.239: INFO: stderr: ""
Feb  8 13:35:37.239: INFO: stdout: "update-demo-nautilus-g7g79 update-demo-nautilus-r9d6b "
Feb  8 13:35:37.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7g79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:37.692: INFO: stderr: ""
Feb  8 13:35:37.692: INFO: stdout: ""
Feb  8 13:35:37.692: INFO: update-demo-nautilus-g7g79 is created but not running
Feb  8 13:35:42.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8771'
Feb  8 13:35:42.814: INFO: stderr: ""
Feb  8 13:35:42.814: INFO: stdout: "update-demo-nautilus-g7g79 update-demo-nautilus-r9d6b "
Feb  8 13:35:42.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7g79 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:42.914: INFO: stderr: ""
Feb  8 13:35:42.914: INFO: stdout: "true"
Feb  8 13:35:42.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7g79 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:42.988: INFO: stderr: ""
Feb  8 13:35:42.988: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:35:42.988: INFO: validating pod update-demo-nautilus-g7g79
Feb  8 13:35:42.996: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:35:42.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  8 13:35:42.996: INFO: update-demo-nautilus-g7g79 is verified up and running
Feb  8 13:35:42.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9d6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:43.073: INFO: stderr: ""
Feb  8 13:35:43.073: INFO: stdout: "true"
Feb  8 13:35:43.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r9d6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8771'
Feb  8 13:35:43.167: INFO: stderr: ""
Feb  8 13:35:43.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  8 13:35:43.167: INFO: validating pod update-demo-nautilus-r9d6b
Feb  8 13:35:43.198: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  8 13:35:43.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  8 13:35:43.198: INFO: update-demo-nautilus-r9d6b is verified up and running
STEP: using delete to clean up resources
Feb  8 13:35:43.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8771'
Feb  8 13:35:43.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 13:35:43.304: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  8 13:35:43.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8771'
Feb  8 13:35:43.394: INFO: stderr: "No resources found.\n"
Feb  8 13:35:43.394: INFO: stdout: ""
Feb  8 13:35:43.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8771 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 13:35:43.485: INFO: stderr: ""
Feb  8 13:35:43.485: INFO: stdout: "update-demo-nautilus-g7g79\nupdate-demo-nautilus-r9d6b\n"
Feb  8 13:35:43.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8771'
Feb  8 13:35:44.091: INFO: stderr: "No resources found.\n"
Feb  8 13:35:44.091: INFO: stdout: ""
Feb  8 13:35:44.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8771 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 13:35:44.180: INFO: stderr: ""
Feb  8 13:35:44.180: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:35:44.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8771" for this suite.
Feb  8 13:36:06.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:36:06.810: INFO: namespace kubectl-8771 deletion completed in 22.620227723s

• [SLOW TEST:37.188 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
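The Update Demo test above creates a replication controller from stdin, polls each pod's container status with `kubectl get pods -o template` until it reports `running`, then force-deletes the controller. A sketch approximating that spec (the controller name, label `name=update-demo`, and nautilus image all appear in the log; the replica count and port are assumptions):

```yaml
# Hypothetical replication controller matching the Update Demo pods seen above:
# two nautilus replicas labelled name=update-demo.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
```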
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:36:06.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  8 13:36:06.976: INFO: Waiting up to 5m0s for pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b" in namespace "downward-api-7119" to be "success or failure"
Feb  8 13:36:07.032: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Pending", Reason="", readiness=false. Elapsed: 56.006727ms
Feb  8 13:36:09.041: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065410988s
Feb  8 13:36:11.049: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072973667s
Feb  8 13:36:13.058: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082044003s
Feb  8 13:36:15.079: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102999616s
Feb  8 13:36:17.088: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112611942s
STEP: Saw pod success
Feb  8 13:36:17.089: INFO: Pod "downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b" satisfied condition "success or failure"
Feb  8 13:36:17.092: INFO: Trying to get logs from node iruya-node pod downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b container dapi-container: 
STEP: delete the pod
Feb  8 13:36:17.603: INFO: Waiting for pod downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b to disappear
Feb  8 13:36:17.611: INFO: Pod downward-api-5c36af43-83f0-4798-ab70-f1e2b1f5435b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:36:17.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7119" for this suite.
Feb  8 13:36:23.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:36:23.795: INFO: namespace downward-api-7119 deletion completed in 6.176564757s

• [SLOW TEST:16.984 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:36:23.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:36:29.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5055" for this suite.
Feb  8 13:36:35.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:36:35.599: INFO: namespace watch-5055 deletion completed in 6.210940131s

• [SLOW TEST:11.803 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:36:35.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7068
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7068
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7068
Feb  8 13:36:35.720: INFO: Found 0 stateful pods, waiting for 1
Feb  8 13:36:45.728: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  8 13:36:45.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:36:46.373: INFO: stderr: "I0208 13:36:45.981765    1914 log.go:172] (0xc0009062c0) (0xc000740640) Create stream\nI0208 13:36:45.981819    1914 log.go:172] (0xc0009062c0) (0xc000740640) Stream added, broadcasting: 1\nI0208 13:36:45.992569    1914 log.go:172] (0xc0009062c0) Reply frame received for 1\nI0208 13:36:45.992600    1914 log.go:172] (0xc0009062c0) (0xc000742000) Create stream\nI0208 13:36:45.992609    1914 log.go:172] (0xc0009062c0) (0xc000742000) Stream added, broadcasting: 3\nI0208 13:36:45.995173    1914 log.go:172] (0xc0009062c0) Reply frame received for 3\nI0208 13:36:45.995200    1914 log.go:172] (0xc0009062c0) (0xc0007406e0) Create stream\nI0208 13:36:45.995209    1914 log.go:172] (0xc0009062c0) (0xc0007406e0) Stream added, broadcasting: 5\nI0208 13:36:45.997507    1914 log.go:172] (0xc0009062c0) Reply frame received for 5\nI0208 13:36:46.152876    1914 log.go:172] (0xc0009062c0) Data frame received for 5\nI0208 13:36:46.152949    1914 log.go:172] (0xc0007406e0) (5) Data frame handling\nI0208 13:36:46.152968    1914 log.go:172] (0xc0007406e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:36:46.210210    1914 log.go:172] (0xc0009062c0) Data frame received for 3\nI0208 13:36:46.210287    1914 log.go:172] (0xc000742000) (3) Data frame handling\nI0208 13:36:46.210314    1914 log.go:172] (0xc000742000) (3) Data frame sent\nI0208 13:36:46.366823    1914 log.go:172] (0xc0009062c0) (0xc000742000) Stream removed, broadcasting: 3\nI0208 13:36:46.366922    1914 log.go:172] (0xc0009062c0) Data frame received for 1\nI0208 13:36:46.366952    1914 log.go:172] (0xc000740640) (1) Data frame handling\nI0208 13:36:46.366960    1914 log.go:172] (0xc000740640) (1) Data frame sent\nI0208 13:36:46.366969    1914 log.go:172] (0xc0009062c0) (0xc000740640) Stream removed, broadcasting: 1\nI0208 13:36:46.367116    1914 log.go:172] (0xc0009062c0) (0xc0007406e0) Stream removed, broadcasting: 5\nI0208 13:36:46.367134    1914 log.go:172] (0xc0009062c0) Go away received\nI0208 13:36:46.367233    1914 log.go:172] (0xc0009062c0) (0xc000740640) Stream removed, broadcasting: 1\nI0208 13:36:46.367243    1914 log.go:172] (0xc0009062c0) (0xc000742000) Stream removed, broadcasting: 3\nI0208 13:36:46.367247    1914 log.go:172] (0xc0009062c0) (0xc0007406e0) Stream removed, broadcasting: 5\n"
Feb  8 13:36:46.373: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:36:46.373: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:36:46.391: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:36:46.391: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:36:46.470: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  8 13:36:46.470: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:36:46.470: INFO: ss-1              Pending         []
Feb  8 13:36:46.470: INFO: 
Feb  8 13:36:46.470: INFO: StatefulSet ss has not reached scale 3, at 2
Feb  8 13:36:48.343: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.965410991s
Feb  8 13:36:49.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.09223636s
Feb  8 13:36:50.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.720429416s
Feb  8 13:36:51.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.713991578s
Feb  8 13:36:52.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.706918695s
Feb  8 13:36:54.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.445827496s
Feb  8 13:36:55.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.429775813s
Feb  8 13:36:56.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 402.699587ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7068
Feb  8 13:36:57.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:36:57.509: INFO: stderr: "I0208 13:36:57.232570    1931 log.go:172] (0xc0007ce580) (0xc0005aeaa0) Create stream\nI0208 13:36:57.232816    1931 log.go:172] (0xc0007ce580) (0xc0005aeaa0) Stream added, broadcasting: 1\nI0208 13:36:57.240309    1931 log.go:172] (0xc0007ce580) Reply frame received for 1\nI0208 13:36:57.240365    1931 log.go:172] (0xc0007ce580) (0xc0008f8000) Create stream\nI0208 13:36:57.240394    1931 log.go:172] (0xc0007ce580) (0xc0008f8000) Stream added, broadcasting: 3\nI0208 13:36:57.242216    1931 log.go:172] (0xc0007ce580) Reply frame received for 3\nI0208 13:36:57.242240    1931 log.go:172] (0xc0007ce580) (0xc0008f80a0) Create stream\nI0208 13:36:57.242247    1931 log.go:172] (0xc0007ce580) (0xc0008f80a0) Stream added, broadcasting: 5\nI0208 13:36:57.243792    1931 log.go:172] (0xc0007ce580) Reply frame received for 5\nI0208 13:36:57.352928    1931 log.go:172] (0xc0007ce580) Data frame received for 3\nI0208 13:36:57.352993    1931 log.go:172] (0xc0008f8000) (3) Data frame handling\nI0208 13:36:57.353016    1931 log.go:172] (0xc0008f8000) (3) Data frame sent\nI0208 13:36:57.353597    1931 log.go:172] (0xc0007ce580) Data frame received for 5\nI0208 13:36:57.353609    1931 log.go:172] (0xc0008f80a0) (5) Data frame handling\nI0208 13:36:57.353617    1931 log.go:172] (0xc0008f80a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 13:36:57.499002    1931 log.go:172] (0xc0007ce580) (0xc0008f8000) Stream removed, broadcasting: 3\nI0208 13:36:57.499098    1931 log.go:172] (0xc0007ce580) (0xc0008f80a0) Stream removed, broadcasting: 5\nI0208 13:36:57.499173    1931 log.go:172] (0xc0007ce580) Data frame received for 1\nI0208 13:36:57.499187    1931 log.go:172] (0xc0005aeaa0) (1) Data frame handling\nI0208 13:36:57.499209    1931 log.go:172] (0xc0005aeaa0) (1) Data frame sent\nI0208 13:36:57.499224    1931 log.go:172] (0xc0007ce580) (0xc0005aeaa0) Stream removed, broadcasting: 1\nI0208 13:36:57.499240    1931 log.go:172] (0xc0007ce580) Go away received\nI0208 13:36:57.500184    1931 log.go:172] (0xc0007ce580) (0xc0005aeaa0) Stream removed, broadcasting: 1\nI0208 13:36:57.500278    1931 log.go:172] (0xc0007ce580) (0xc0008f8000) Stream removed, broadcasting: 3\nI0208 13:36:57.500291    1931 log.go:172] (0xc0007ce580) (0xc0008f80a0) Stream removed, broadcasting: 5\n"
Feb  8 13:36:57.509: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:36:57.509: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:36:57.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:36:57.968: INFO: stderr: "I0208 13:36:57.762206    1952 log.go:172] (0xc00075a0b0) (0xc0003608c0) Create stream\nI0208 13:36:57.762267    1952 log.go:172] (0xc00075a0b0) (0xc0003608c0) Stream added, broadcasting: 1\nI0208 13:36:57.765705    1952 log.go:172] (0xc00075a0b0) Reply frame received for 1\nI0208 13:36:57.765738    1952 log.go:172] (0xc00075a0b0) (0xc000360960) Create stream\nI0208 13:36:57.765744    1952 log.go:172] (0xc00075a0b0) (0xc000360960) Stream added, broadcasting: 3\nI0208 13:36:57.767517    1952 log.go:172] (0xc00075a0b0) Reply frame received for 3\nI0208 13:36:57.767535    1952 log.go:172] (0xc00075a0b0) (0xc00086c000) Create stream\nI0208 13:36:57.767542    1952 log.go:172] (0xc00075a0b0) (0xc00086c000) Stream added, broadcasting: 5\nI0208 13:36:57.768935    1952 log.go:172] (0xc00075a0b0) Reply frame received for 5\nI0208 13:36:57.846204    1952 log.go:172] (0xc00075a0b0) Data frame received for 5\nI0208 13:36:57.846324    1952 log.go:172] (0xc00086c000) (5) Data frame handling\nI0208 13:36:57.846357    1952 log.go:172] (0xc00086c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0208 13:36:57.846483    1952 log.go:172] (0xc00075a0b0) Data frame received for 3\nI0208 13:36:57.846524    1952 log.go:172] (0xc000360960) (3) Data frame handling\nI0208 13:36:57.846569    1952 log.go:172] (0xc000360960) (3) Data frame sent\nI0208 13:36:57.956199    1952 log.go:172] (0xc00075a0b0) Data frame received for 1\nI0208 13:36:57.956468    1952 log.go:172] (0xc0003608c0) (1) Data frame handling\nI0208 13:36:57.956584    1952 log.go:172] (0xc0003608c0) (1) Data frame sent\nI0208 13:36:57.956626    1952 log.go:172] (0xc00075a0b0) (0xc0003608c0) Stream removed, broadcasting: 1\nI0208 13:36:57.956662    1952 log.go:172] (0xc00075a0b0) (0xc000360960) Stream removed, broadcasting: 3\nI0208 13:36:57.956806    1952 log.go:172] (0xc00075a0b0) (0xc00086c000) Stream removed, broadcasting: 5\nI0208 13:36:57.956965    1952 log.go:172] (0xc00075a0b0) Go away received\nI0208 13:36:57.957169    1952 log.go:172] (0xc00075a0b0) (0xc0003608c0) Stream removed, broadcasting: 1\nI0208 13:36:57.957180    1952 log.go:172] (0xc00075a0b0) (0xc000360960) Stream removed, broadcasting: 3\nI0208 13:36:57.957184    1952 log.go:172] (0xc00075a0b0) (0xc00086c000) Stream removed, broadcasting: 5\n"
Feb  8 13:36:57.968: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:36:57.968: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:36:57.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 13:36:58.452: INFO: stderr: "I0208 13:36:58.136862    1971 log.go:172] (0xc00080c420) (0xc0006b8640) Create stream\nI0208 13:36:58.137035    1971 log.go:172] (0xc00080c420) (0xc0006b8640) Stream added, broadcasting: 1\nI0208 13:36:58.142311    1971 log.go:172] (0xc00080c420) Reply frame received for 1\nI0208 13:36:58.142359    1971 log.go:172] (0xc00080c420) (0xc00061e280) Create stream\nI0208 13:36:58.142374    1971 log.go:172] (0xc00080c420) (0xc00061e280) Stream added, broadcasting: 3\nI0208 13:36:58.145866    1971 log.go:172] (0xc00080c420) Reply frame received for 3\nI0208 13:36:58.145931    1971 log.go:172] (0xc00080c420) (0xc0006ae000) Create stream\nI0208 13:36:58.145947    1971 log.go:172] (0xc00080c420) (0xc0006ae000) Stream added, broadcasting: 5\nI0208 13:36:58.147924    1971 log.go:172] (0xc00080c420) Reply frame received for 5\nI0208 13:36:58.234382    1971 log.go:172] (0xc00080c420) Data frame received for 5\nI0208 13:36:58.234498    1971 log.go:172] (0xc0006ae000) (5) Data frame handling\nI0208 13:36:58.234510    1971 log.go:172] (0xc0006ae000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0208 13:36:58.234530    1971 log.go:172] (0xc00080c420) Data frame received for 3\nI0208 13:36:58.234542    1971 log.go:172] (0xc00061e280) (3) Data frame handling\nI0208 13:36:58.234574    1971 log.go:172] (0xc00061e280) (3) Data frame sent\nI0208 13:36:58.443864    1971 log.go:172] (0xc00080c420) (0xc00061e280) Stream removed, broadcasting: 3\nI0208 13:36:58.444075    1971 log.go:172] (0xc00080c420) Data frame received for 1\nI0208 13:36:58.444111    1971 log.go:172] (0xc00080c420) (0xc0006ae000) Stream removed, broadcasting: 5\nI0208 13:36:58.444249    1971 log.go:172] (0xc0006b8640) (1) Data frame handling\nI0208 13:36:58.444270    1971 log.go:172] (0xc0006b8640) (1) Data frame sent\nI0208 13:36:58.444383    1971 log.go:172] (0xc00080c420) (0xc0006b8640) Stream removed, broadcasting: 1\nI0208 13:36:58.444412    1971 log.go:172] (0xc00080c420) Go away received\nI0208 13:36:58.446711    1971 log.go:172] (0xc00080c420) (0xc0006b8640) Stream removed, broadcasting: 1\nI0208 13:36:58.446754    1971 log.go:172] (0xc00080c420) (0xc00061e280) Stream removed, broadcasting: 3\nI0208 13:36:58.446780    1971 log.go:172] (0xc00080c420) (0xc0006ae000) Stream removed, broadcasting: 5\n"
Feb  8 13:36:58.453: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 13:36:58.453: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 13:36:58.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:36:58.466: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 13:36:58.466: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  8 13:36:58.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:36:58.902: INFO: stderr: "I0208 13:36:58.639261    1990 log.go:172] (0xc0008be0b0) (0xc000732640) Create stream\nI0208 13:36:58.639375    1990 log.go:172] (0xc0008be0b0) (0xc000732640) Stream added, broadcasting: 1\nI0208 13:36:58.647490    1990 log.go:172] (0xc0008be0b0) Reply frame received for 1\nI0208 13:36:58.647520    1990 log.go:172] (0xc0008be0b0) (0xc0007fe000) Create stream\nI0208 13:36:58.647528    1990 log.go:172] (0xc0008be0b0) (0xc0007fe000) Stream added, broadcasting: 3\nI0208 13:36:58.648993    1990 log.go:172] (0xc0008be0b0) Reply frame received for 3\nI0208 13:36:58.649013    1990 log.go:172] (0xc0008be0b0) (0xc0007fe0a0) Create stream\nI0208 13:36:58.649019    1990 log.go:172] (0xc0008be0b0) (0xc0007fe0a0) Stream added, broadcasting: 5\nI0208 13:36:58.650198    1990 log.go:172] (0xc0008be0b0) Reply frame received for 5\nI0208 13:36:58.739964    1990 log.go:172] (0xc0008be0b0) Data frame received for 3\nI0208 13:36:58.740020    1990 log.go:172] (0xc0007fe000) (3) Data frame handling\nI0208 13:36:58.740030    1990 log.go:172] (0xc0007fe000) (3) Data frame sent\nI0208 13:36:58.740047    1990 log.go:172] (0xc0008be0b0) Data frame received for 5\nI0208 13:36:58.740055    1990 log.go:172] (0xc0007fe0a0) (5) Data frame handling\nI0208 13:36:58.740064    1990 log.go:172] (0xc0007fe0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:36:58.896583    1990 log.go:172] (0xc0008be0b0) (0xc0007fe000) Stream removed, broadcasting: 3\nI0208 13:36:58.896744    1990 log.go:172] (0xc0008be0b0) Data frame received for 1\nI0208 13:36:58.896762    1990 log.go:172] (0xc000732640) (1) Data frame handling\nI0208 13:36:58.896774    1990 log.go:172] (0xc000732640) (1) Data frame sent\nI0208 13:36:58.896843    1990 log.go:172] (0xc0008be0b0) (0xc0007fe0a0) Stream removed, broadcasting: 5\nI0208 13:36:58.896900    1990 log.go:172] (0xc0008be0b0) (0xc000732640) Stream removed, broadcasting: 1\nI0208 13:36:58.896921    1990 log.go:172] (0xc0008be0b0) Go away received\nI0208 13:36:58.897133    1990 log.go:172] (0xc0008be0b0) (0xc000732640) Stream removed, broadcasting: 1\nI0208 13:36:58.897149    1990 log.go:172] (0xc0008be0b0) (0xc0007fe000) Stream removed, broadcasting: 3\nI0208 13:36:58.897156    1990 log.go:172] (0xc0008be0b0) (0xc0007fe0a0) Stream removed, broadcasting: 5\n"
Feb  8 13:36:58.902: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:36:58.902: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:36:58.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:36:59.296: INFO: stderr: "I0208 13:36:59.045293    2005 log.go:172] (0xc000a10420) (0xc000a0e640) Create stream\nI0208 13:36:59.045415    2005 log.go:172] (0xc000a10420) (0xc000a0e640) Stream added, broadcasting: 1\nI0208 13:36:59.052480    2005 log.go:172] (0xc000a10420) Reply frame received for 1\nI0208 13:36:59.052573    2005 log.go:172] (0xc000a10420) (0xc00003a140) Create stream\nI0208 13:36:59.052602    2005 log.go:172] (0xc000a10420) (0xc00003a140) Stream added, broadcasting: 3\nI0208 13:36:59.053645    2005 log.go:172] (0xc000a10420) Reply frame received for 3\nI0208 13:36:59.053664    2005 log.go:172] (0xc000a10420) (0xc00003a460) Create stream\nI0208 13:36:59.053673    2005 log.go:172] (0xc000a10420) (0xc00003a460) Stream added, broadcasting: 5\nI0208 13:36:59.055108    2005 log.go:172] (0xc000a10420) Reply frame received for 5\nI0208 13:36:59.141112    2005 log.go:172] (0xc000a10420) Data frame received for 5\nI0208 13:36:59.141145    2005 log.go:172] (0xc00003a460) (5) Data frame handling\nI0208 13:36:59.141159    2005 log.go:172] (0xc00003a460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:36:59.188942    2005 log.go:172] (0xc000a10420) Data frame received for 3\nI0208 13:36:59.188960    2005 log.go:172] (0xc00003a140) (3) Data frame handling\nI0208 13:36:59.188968    2005 log.go:172] (0xc00003a140) (3) Data frame sent\nI0208 13:36:59.284547    2005 log.go:172] (0xc000a10420) Data frame received for 1\nI0208 13:36:59.284675    2005 log.go:172] (0xc000a10420) (0xc00003a140) Stream removed, broadcasting: 3\nI0208 13:36:59.284720    2005 log.go:172] (0xc000a0e640) (1) Data frame handling\nI0208 13:36:59.284764    2005 log.go:172] (0xc000a10420) (0xc00003a460) Stream removed, broadcasting: 5\nI0208 13:36:59.284809    2005 log.go:172] (0xc000a0e640) (1) Data frame sent\nI0208 13:36:59.284868    2005 log.go:172] (0xc000a10420) (0xc000a0e640) Stream removed, broadcasting: 1\nI0208 13:36:59.284879    2005 log.go:172] (0xc000a10420) Go away received\nI0208 13:36:59.285797    2005 log.go:172] (0xc000a10420) (0xc000a0e640) Stream removed, broadcasting: 1\nI0208 13:36:59.285861    2005 log.go:172] (0xc000a10420) (0xc00003a140) Stream removed, broadcasting: 3\nI0208 13:36:59.285881    2005 log.go:172] (0xc000a10420) (0xc00003a460) Stream removed, broadcasting: 5\n"
Feb  8 13:36:59.296: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:36:59.296: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:36:59.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7068 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 13:36:59.942: INFO: stderr: "I0208 13:36:59.468542    2024 log.go:172] (0xc0009b2370) (0xc0009da5a0) Create stream\nI0208 13:36:59.468657    2024 log.go:172] (0xc0009b2370) (0xc0009da5a0) Stream added, broadcasting: 1\nI0208 13:36:59.473906    2024 log.go:172] (0xc0009b2370) Reply frame received for 1\nI0208 13:36:59.473964    2024 log.go:172] (0xc0009b2370) (0xc0009da6e0) Create stream\nI0208 13:36:59.473971    2024 log.go:172] (0xc0009b2370) (0xc0009da6e0) Stream added, broadcasting: 3\nI0208 13:36:59.475363    2024 log.go:172] (0xc0009b2370) Reply frame received for 3\nI0208 13:36:59.475393    2024 log.go:172] (0xc0009b2370) (0xc0009ec000) Create stream\nI0208 13:36:59.475407    2024 log.go:172] (0xc0009b2370) (0xc0009ec000) Stream added, broadcasting: 5\nI0208 13:36:59.478692    2024 log.go:172] (0xc0009b2370) Reply frame received for 5\nI0208 13:36:59.604094    2024 log.go:172] (0xc0009b2370) Data frame received for 5\nI0208 13:36:59.604199    2024 log.go:172] (0xc0009ec000) (5) Data frame handling\nI0208 13:36:59.604226    2024 log.go:172] (0xc0009ec000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 13:36:59.627876    2024 log.go:172] (0xc0009b2370) Data frame received for 3\nI0208 13:36:59.628006    2024 log.go:172] (0xc0009da6e0) (3) Data frame handling\nI0208 13:36:59.628035    2024 log.go:172] (0xc0009da6e0) (3) Data frame sent\nI0208 13:36:59.935805    2024 log.go:172] (0xc0009b2370) (0xc0009da6e0) Stream removed, broadcasting: 3\nI0208 13:36:59.935933    2024 log.go:172] (0xc0009b2370) Data frame received for 1\nI0208 13:36:59.935945    2024 log.go:172] (0xc0009da5a0) (1) Data frame handling\nI0208 13:36:59.935954    2024 log.go:172] (0xc0009da5a0) (1) Data frame sent\nI0208 13:36:59.935971    2024 log.go:172] (0xc0009b2370) (0xc0009ec000) Stream removed, broadcasting: 5\nI0208 13:36:59.936040    2024 log.go:172] (0xc0009b2370) (0xc0009da5a0) Stream removed, broadcasting: 1\nI0208 13:36:59.936068    2024 log.go:172] (0xc0009b2370) Go away received\nI0208 13:36:59.936789    2024 log.go:172] (0xc0009b2370) (0xc0009da5a0) Stream removed, broadcasting: 1\nI0208 13:36:59.936840    2024 log.go:172] (0xc0009b2370) (0xc0009da6e0) Stream removed, broadcasting: 3\nI0208 13:36:59.936902    2024 log.go:172] (0xc0009b2370) (0xc0009ec000) Stream removed, broadcasting: 5\n"
Feb  8 13:36:59.943: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 13:36:59.943: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 13:36:59.943: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:36:59.990: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  8 13:37:10.058: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:37:10.058: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:37:10.058: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  8 13:37:10.085: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:10.085: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:10.085: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:10.085: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:10.085: INFO: 
Feb  8 13:37:10.085: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:11.482: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:11.482: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:11.482: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:11.482: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:11.482: INFO: 
Feb  8 13:37:11.482: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:12.490: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:12.491: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:12.491: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:12.491: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:12.491: INFO: 
Feb  8 13:37:12.491: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:13.503: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:13.503: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:13.503: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:13.503: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:13.503: INFO: 
Feb  8 13:37:13.503: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:14.519: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:14.519: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:14.519: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:14.519: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:14.519: INFO: 
Feb  8 13:37:14.519: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:15.547: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  8 13:37:15.547: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:15.547: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:15.547: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:15.547: INFO: 
Feb  8 13:37:15.547: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  8 13:37:16.560: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  8 13:37:16.560: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:16.560: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:16.560: INFO: 
Feb  8 13:37:16.560: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  8 13:37:17.567: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  8 13:37:17.568: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:17.568: INFO: ss-2  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:37:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:46 +0000 UTC  }]
Feb  8 13:37:17.568: INFO: 
Feb  8 13:37:17.568: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  8 13:37:18.578: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  8 13:37:18.578: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:36:35 +0000 UTC  }]
Feb  8 13:37:18.578: INFO: 
Feb  8 13:37:18.578: INFO: StatefulSet ss has not reached scale 0, at 1
Feb  8 13:37:19.585: INFO: Verifying statefulset ss doesn't scale past 0 for another 498.73789ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7068
Feb  8 13:37:20.596: INFO: Scaling statefulset ss to 0
Feb  8 13:37:20.611: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  8 13:37:20.615: INFO: Deleting all statefulset in ns statefulset-7068
Feb  8 13:37:20.619: INFO: Scaling statefulset ss to 0
Feb  8 13:37:20.631: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:37:20.635: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:37:20.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7068" for this suite.
Feb  8 13:37:26.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:37:26.779: INFO: namespace statefulset-7068 deletion completed in 6.109002798s

• [SLOW TEST:51.180 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:37:26.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  8 13:37:35.478: INFO: Successfully updated pod "labelsupdate86927387-5a61-4839-84e5-118aaecf471f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:37:39.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7359" for this suite.
Feb  8 13:38:01.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:38:01.899: INFO: namespace projected-7359 deletion completed in 22.216694399s

• [SLOW TEST:35.120 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:38:01.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-7a452e03-149d-47ad-88ac-fc59f6d5bc93
STEP: Creating a pod to test consume secrets
Feb  8 13:38:02.034: INFO: Waiting up to 5m0s for pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051" in namespace "secrets-5167" to be "success or failure"
Feb  8 13:38:02.043: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051": Phase="Pending", Reason="", readiness=false. Elapsed: 8.95486ms
Feb  8 13:38:04.056: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021654113s
Feb  8 13:38:06.112: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07805698s
Feb  8 13:38:08.129: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094752767s
Feb  8 13:38:10.144: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10925324s
STEP: Saw pod success
Feb  8 13:38:10.144: INFO: Pod "pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051" satisfied condition "success or failure"
Feb  8 13:38:10.147: INFO: Trying to get logs from node iruya-node pod pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051 container secret-volume-test: 
STEP: delete the pod
Feb  8 13:38:10.293: INFO: Waiting for pod pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051 to disappear
Feb  8 13:38:10.307: INFO: Pod pod-secrets-e3de4165-d582-4178-a5f5-3e343ec18051 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:38:10.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5167" for this suite.
Feb  8 13:38:16.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:38:16.666: INFO: namespace secrets-5167 deletion completed in 6.348323563s

• [SLOW TEST:14.767 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:38:16.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0208 13:38:32.074616       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 13:38:32.074: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:38:32.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6728" for this suite.
Feb  8 13:38:42.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:38:43.720: INFO: namespace gc-6728 deletion completed in 11.64126549s

• [SLOW TEST:27.055 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:38:43.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  8 13:38:45.561: INFO: Waiting up to 5m0s for pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413" in namespace "downward-api-4341" to be "success or failure"
Feb  8 13:38:45.743: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Pending", Reason="", readiness=false. Elapsed: 182.100715ms
Feb  8 13:38:47.754: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193227077s
Feb  8 13:38:49.763: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202165779s
Feb  8 13:38:51.771: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210352246s
Feb  8 13:38:53.831: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269786845s
Feb  8 13:38:55.839: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.277711568s
STEP: Saw pod success
Feb  8 13:38:55.839: INFO: Pod "downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413" satisfied condition "success or failure"
Feb  8 13:38:55.867: INFO: Trying to get logs from node iruya-node pod downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413 container dapi-container: 
STEP: delete the pod
Feb  8 13:38:55.958: INFO: Waiting for pod downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413 to disappear
Feb  8 13:38:55.966: INFO: Pod downward-api-1359cc8a-ca1c-4ae5-87d0-ad04ec4f2413 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:38:55.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4341" for this suite.
Feb  8 13:39:02.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:39:02.206: INFO: namespace downward-api-4341 deletion completed in 6.200585649s

• [SLOW TEST:18.486 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:39:02.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a60c7887-c9d0-4a46-8051-a490d0023971 in namespace container-probe-8637
Feb  8 13:39:10.310: INFO: Started pod busybox-a60c7887-c9d0-4a46-8051-a490d0023971 in namespace container-probe-8637
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 13:39:10.320: INFO: Initial restart count of pod busybox-a60c7887-c9d0-4a46-8051-a490d0023971 is 0
Feb  8 13:40:08.694: INFO: Restart count of pod container-probe-8637/busybox-a60c7887-c9d0-4a46-8051-a490d0023971 is now 1 (58.373604673s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:40:08.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8637" for this suite.
Feb  8 13:40:14.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:40:15.109: INFO: namespace container-probe-8637 deletion completed in 6.358194659s

• [SLOW TEST:72.902 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:40:15.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-3a742879-4414-4721-b4fe-e313730ac1a3
STEP: Creating secret with name s-test-opt-upd-ce294f38-7c57-436d-b914-e90a035035e6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3a742879-4414-4721-b4fe-e313730ac1a3
STEP: Updating secret s-test-opt-upd-ce294f38-7c57-436d-b914-e90a035035e6
STEP: Creating secret with name s-test-opt-create-2040bcb2-1a25-45bf-9a5e-c5ba5a89fc01
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:41:49.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8058" for this suite.
Feb  8 13:42:11.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:42:11.519: INFO: namespace secrets-8058 deletion completed in 22.243524s

• [SLOW TEST:116.410 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:42:11.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-993c3384-1591-443a-8444-91ef713485fd
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-993c3384-1591-443a-8444-91ef713485fd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:42:23.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-222" for this suite.
Feb  8 13:42:45.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:42:45.478: INFO: namespace projected-222 deletion completed in 22.199975757s

• [SLOW TEST:33.959 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:42:45.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:42:54.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1182" for this suite.
Feb  8 13:43:18.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:43:18.848: INFO: namespace replication-controller-1182 deletion completed in 24.170668403s

• [SLOW TEST:33.370 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:43:18.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:43:18.998: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  8 13:43:24.007: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 13:43:28.018: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  8 13:43:28.048: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5615,SelfLink:/apis/apps/v1/namespaces/deployment-5615/deployments/test-cleanup-deployment,UID:c7183301-1657-4602-ad44-d7727c46c8ca,ResourceVersion:23573124,Generation:1,CreationTimestamp:2020-02-08 13:43:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  8 13:43:28.053: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb  8 13:43:28.053: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  8 13:43:28.053: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5615,SelfLink:/apis/apps/v1/namespaces/deployment-5615/replicasets/test-cleanup-controller,UID:f141de68-3085-47c3-81b9-cbf9144ef905,ResourceVersion:23573125,Generation:1,CreationTimestamp:2020-02-08 13:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c7183301-1657-4602-ad44-d7727c46c8ca 0xc0029c4a97 0xc0029c4a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  8 13:43:28.073: INFO: Pod "test-cleanup-controller-7x6f6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-7x6f6,GenerateName:test-cleanup-controller-,Namespace:deployment-5615,SelfLink:/api/v1/namespaces/deployment-5615/pods/test-cleanup-controller-7x6f6,UID:c4757fa3-4fde-42fd-bdd6-8dba20473778,ResourceVersion:23573121,Generation:0,CreationTimestamp:2020-02-08 13:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller f141de68-3085-47c3-81b9-cbf9144ef905 0xc00258dde7 0xc00258dde8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-sjhh7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sjhh7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-sjhh7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00258de60} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00258de80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:43:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:43:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:43:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:43:19 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-08 13:43:19 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 13:43:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://783040224c1053d6b42dcde54dc41015c65816e42a280d02b59ebab900732d2c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:43:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5615" for this suite.
Feb  8 13:43:34.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:43:34.413: INFO: namespace deployment-5615 deletion completed in 6.325372925s

• [SLOW TEST:15.565 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
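For context on the preceding test: the `RevisionHistoryLimit:*0` visible in the Deployment dump above is what drives the cleanup behavior — with `spec.revisionHistoryLimit: 0`, a Deployment retains no superseded ReplicaSets after a rollout. A minimal illustrative manifest (names are placeholders, not the generated ones the test uses) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo          # illustrative name
spec:
  revisionHistoryLimit: 0     # old ReplicaSets are deleted once superseded
  replicas: 1
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```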
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:43:34.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 13:43:34.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1383'
Feb  8 13:43:37.450: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 13:43:37.450: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  8 13:43:39.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1383'
Feb  8 13:43:39.781: INFO: stderr: ""
Feb  8 13:43:39.782: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:43:39.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1383" for this suite.
Feb  8 13:43:45.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:43:46.021: INFO: namespace kubectl-1383 deletion completed in 6.18555334s

• [SLOW TEST:11.608 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
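The stderr captured in this test notes that `kubectl run --generator=deployment/apps.v1` is deprecated in favor of `kubectl create deployment` or a declarative manifest. As a sketch, the Deployment that generator produces is roughly equivalent to applying the following (the `run:` label key is what that generator used to set, stated here from memory, not from this log):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment     # label key the old generator applied (assumption)
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```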
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:43:46.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4617
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4617
STEP: Creating statefulset with conflicting port in namespace statefulset-4617
STEP: Waiting until pod test-pod will start running in namespace statefulset-4617
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4617
Feb  8 13:43:54.394: INFO: Observed stateful pod in namespace: statefulset-4617, name: ss-0, uid: 19b18129-f372-4769-8c20-a24c559977b7, status phase: Pending. Waiting for statefulset controller to delete.
Feb  8 13:43:56.592: INFO: Observed stateful pod in namespace: statefulset-4617, name: ss-0, uid: 19b18129-f372-4769-8c20-a24c559977b7, status phase: Failed. Waiting for statefulset controller to delete.
Feb  8 13:43:56.628: INFO: Observed stateful pod in namespace: statefulset-4617, name: ss-0, uid: 19b18129-f372-4769-8c20-a24c559977b7, status phase: Failed. Waiting for statefulset controller to delete.
Feb  8 13:43:56.651: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4617
STEP: Removing pod with conflicting port in namespace statefulset-4617
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4617 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  8 13:44:07.023: INFO: Deleting all statefulset in ns statefulset-4617
Feb  8 13:44:07.026: INFO: Scaling statefulset ss to 0
Feb  8 13:44:17.080: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 13:44:17.088: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:44:17.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4617" for this suite.
Feb  8 13:44:23.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:44:23.363: INFO: namespace statefulset-4617 deletion completed in 6.218134236s

• [SLOW TEST:37.342 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
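The "conflicting port" in the StatefulSet test above comes from pinning a plain pod and the stateful pod to the same node with the same `hostPort`, so `ss-0` fails admission and the controller must recreate it. A hedged sketch of the kind of conflicting pod spec involved (the port number is illustrative; only the node name is taken from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-node        # pinned to the same node the stateful pod lands on
  containers:
  - name: webserver
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017         # illustrative; any hostPort shared by both pods conflicts
```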
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:44:23.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  8 13:44:23.980: INFO: created pod pod-service-account-defaultsa
Feb  8 13:44:23.980: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  8 13:44:23.990: INFO: created pod pod-service-account-mountsa
Feb  8 13:44:23.991: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  8 13:44:24.029: INFO: created pod pod-service-account-nomountsa
Feb  8 13:44:24.029: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  8 13:44:24.133: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  8 13:44:24.133: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  8 13:44:24.181: INFO: created pod pod-service-account-mountsa-mountspec
Feb  8 13:44:24.181: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  8 13:44:24.209: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  8 13:44:24.209: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  8 13:44:24.329: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  8 13:44:24.329: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  8 13:44:24.384: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  8 13:44:24.384: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  8 13:44:24.421: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  8 13:44:24.421: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:44:24.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3917" for this suite.
Feb  8 13:45:06.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:45:06.470: INFO: namespace svcaccounts-3917 deletion completed in 41.82638436s

• [SLOW TEST:43.106 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
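The matrix logged above shows the pod-level `automountServiceAccountToken` setting overriding the ServiceAccount-level one (e.g. `pod-service-account-mountsa-nomountspec` ends up with `mount: false` despite its SA allowing the mount). Opting out at either level looks roughly like this (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # default for pods using this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false  # pod-level setting wins over the SA's
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
```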
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:45:06.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:45:16.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2172" for this suite.
Feb  8 13:45:22.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:45:23.014: INFO: namespace emptydir-wrapper-2172 deletion completed in 6.184758366s

• [SLOW TEST:16.543 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:45:23.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-2673/secret-test-d8af109d-a24b-4416-a27d-2248c028451c
STEP: Creating a pod to test consume secrets
Feb  8 13:45:23.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f" in namespace "secrets-2673" to be "success or failure"
Feb  8 13:45:23.183: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f": Phase="Pending", Reason="", readiness=false. Elapsed: 94.888546ms
Feb  8 13:45:25.632: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543870201s
Feb  8 13:45:27.643: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.554380018s
Feb  8 13:45:29.655: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.566313966s
Feb  8 13:45:31.664: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.575244878s
STEP: Saw pod success
Feb  8 13:45:31.664: INFO: Pod "pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f" satisfied condition "success or failure"
Feb  8 13:45:31.673: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f container env-test: 
STEP: delete the pod
Feb  8 13:45:31.766: INFO: Waiting for pod pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f to disappear
Feb  8 13:45:31.788: INFO: Pod pod-configmaps-a4b53943-5d06-4385-ae5d-24447e2da64f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:45:31.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2673" for this suite.
Feb  8 13:45:37.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:45:38.068: INFO: namespace secrets-2673 deletion completed in 6.268058277s

• [SLOW TEST:15.053 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:45:38.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:45:46.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2519" for this suite.
Feb  8 13:46:30.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:46:30.398: INFO: namespace kubelet-test-2519 deletion completed in 44.177311647s

• [SLOW TEST:52.330 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:46:30.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-cbfb680a-6646-4512-95b6-509d2705587b in namespace container-probe-8517
Feb  8 13:46:40.543: INFO: Started pod test-webserver-cbfb680a-6646-4512-95b6-509d2705587b in namespace container-probe-8517
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 13:46:40.549: INFO: Initial restart count of pod test-webserver-cbfb680a-6646-4512-95b6-509d2705587b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:50:40.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8517" for this suite.
Feb  8 13:50:46.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:50:46.847: INFO: namespace container-probe-8517 deletion completed in 6.173381168s

• [SLOW TEST:256.449 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
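The probe test above verifies that a consistently healthy `/healthz` endpoint never increments `restartCount` over the four-minute observation window. The probe configuration being exercised has roughly this shape (image and timing values are assumptions, not read from this log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz      # always returns 200, so no restart should occur
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
```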
SSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:50:46.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 13:50:47.012: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  8 13:50:47.037: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  8 13:50:52.101: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 13:50:56.122: INFO: Creating deployment "test-rolling-update-deployment"
Feb  8 13:50:56.136: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  8 13:50:56.162: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  8 13:50:58.272: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  8 13:50:58.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:51:00.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:51:02.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716766656, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 13:51:04.286: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  8 13:51:04.298: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3256,SelfLink:/apis/apps/v1/namespaces/deployment-3256/deployments/test-rolling-update-deployment,UID:18850edd-8311-4ff0-864e-135ba66907dc,ResourceVersion:23574158,Generation:1,CreationTimestamp:2020-02-08 13:50:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-08 13:50:56 +0000 UTC 2020-02-08 13:50:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-08 13:51:03 +0000 UTC 2020-02-08 13:50:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  8 13:51:04.303: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3256,SelfLink:/apis/apps/v1/namespaces/deployment-3256/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:1406c187-6b3b-4e60-b41a-888e9273cda4,ResourceVersion:23574147,Generation:1,CreationTimestamp:2020-02-08 13:50:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 18850edd-8311-4ff0-864e-135ba66907dc 0xc002c89597 0xc002c89598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  8 13:51:04.303: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  8 13:51:04.303: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3256,SelfLink:/apis/apps/v1/namespaces/deployment-3256/replicasets/test-rolling-update-controller,UID:6d28f38e-a2e4-4955-a1f1-8df4bb243ee2,ResourceVersion:23574157,Generation:2,CreationTimestamp:2020-02-08 13:50:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 18850edd-8311-4ff0-864e-135ba66907dc 0xc002c894af 0xc002c894c0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 13:51:04.308: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-tmftg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-tmftg,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3256,SelfLink:/api/v1/namespaces/deployment-3256/pods/test-rolling-update-deployment-79f6b9d75c-tmftg,UID:06858eef-273c-44c0-8354-b14f070011a1,ResourceVersion:23574146,Generation:0,CreationTimestamp:2020-02-08 13:50:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 1406c187-6b3b-4e60-b41a-888e9273cda4 0xc00223dd27 0xc00223dd28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fnvhf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fnvhf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-fnvhf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00223dda0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00223dde0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:50:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:51:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:51:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 13:50:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-08 13:50:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-08 13:51:02 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2501ace58dbc9b7850259239e22210e39cb72029c39518c9c821fd0e317a85b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:51:04.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3256" for this suite.
Feb  8 13:51:10.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:51:10.523: INFO: namespace deployment-3256 deletion completed in 6.205189369s

• [SLOW TEST:23.676 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:51:10.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-78b604b1-782d-41ba-9ffc-867ab5e9bfc7
STEP: Creating a pod to test consume secrets
Feb  8 13:51:10.710: INFO: Waiting up to 5m0s for pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5" in namespace "secrets-8918" to be "success or failure"
Feb  8 13:51:10.817: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Pending", Reason="", readiness=false. Elapsed: 106.971674ms
Feb  8 13:51:12.824: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114459325s
Feb  8 13:51:14.829: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119747509s
Feb  8 13:51:16.836: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126657655s
Feb  8 13:51:18.843: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133474344s
Feb  8 13:51:20.855: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145474194s
STEP: Saw pod success
Feb  8 13:51:20.855: INFO: Pod "pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5" satisfied condition "success or failure"
Feb  8 13:51:20.866: INFO: Trying to get logs from node iruya-node pod pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5 container secret-volume-test: 
STEP: delete the pod
Feb  8 13:51:20.951: INFO: Waiting for pod pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5 to disappear
Feb  8 13:51:20.998: INFO: Pod pod-secrets-9657b7d3-d6c8-4a79-b1ef-da0dd2b825a5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:51:20.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8918" for this suite.
Feb  8 13:51:27.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:51:27.155: INFO: namespace secrets-8918 deletion completed in 6.149434311s

• [SLOW TEST:16.632 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:51:27.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1480.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1480.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 2.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.2_udp@PTR;check="$$(dig +tcp +noall +answer +search 2.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.2_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1480.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1480.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1480.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1480.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 2.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.2_udp@PTR;check="$$(dig +tcp +noall +answer +search 2.44.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.44.2_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:51:41.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.401: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.407: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.415: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.420: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.424: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.429: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.436: INFO: Unable to read 10.96.44.2_udp@PTR from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.441: INFO: Unable to read 10.96.44.2_tcp@PTR from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.447: INFO: Unable to read jessie_udp@dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.457: INFO: Unable to read jessie_tcp@dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.472: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.477: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.481: INFO: Unable to read jessie_udp@PodARecord from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.486: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.490: INFO: Unable to read 10.96.44.2_udp@PTR from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.496: INFO: Unable to read 10.96.44.2_tcp@PTR from pod dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab: the server could not find the requested resource (get pods dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab)
Feb  8 13:51:41.496: INFO: Lookups using dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab failed for: [wheezy_udp@dns-test-service.dns-1480.svc.cluster.local wheezy_tcp@dns-test-service.dns-1480.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.96.44.2_udp@PTR 10.96.44.2_tcp@PTR jessie_udp@dns-test-service.dns-1480.svc.cluster.local jessie_tcp@dns-test-service.dns-1480.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1480.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-1480.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-1480.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.96.44.2_udp@PTR 10.96.44.2_tcp@PTR]

Feb  8 13:51:46.762: INFO: DNS probes using dns-1480/dns-test-7a4635d5-f28a-4cf4-9773-c2d9bbbf80ab succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:51:47.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1480" for this suite.
Feb  8 13:51:53.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:51:53.492: INFO: namespace dns-1480 deletion completed in 6.18379127s

• [SLOW TEST:26.336 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:51:53.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-1239b3b2-17bb-45f9-9216-a350273bad11 in namespace container-probe-3407
Feb  8 13:52:01.574: INFO: Started pod busybox-1239b3b2-17bb-45f9-9216-a350273bad11 in namespace container-probe-3407
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 13:52:01.579: INFO: Initial restart count of pod busybox-1239b3b2-17bb-45f9-9216-a350273bad11 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:56:02.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3407" for this suite.
Feb  8 13:56:08.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:56:08.446: INFO: namespace container-probe-3407 deletion completed in 6.203963064s

• [SLOW TEST:254.954 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:56:08.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:56:20.663: INFO: File wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-cb27b211-dae5-4321-8b42-8e888d84e658 contains '' instead of 'foo.example.com.'
Feb  8 13:56:20.669: INFO: File jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-cb27b211-dae5-4321-8b42-8e888d84e658 contains '' instead of 'foo.example.com.'
Feb  8 13:56:20.669: INFO: Lookups using dns-4484/dns-test-cb27b211-dae5-4321-8b42-8e888d84e658 failed for: [wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local]

Feb  8 13:56:25.695: INFO: DNS probes using dns-test-cb27b211-dae5-4321-8b42-8e888d84e658 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:56:40.488: INFO: File wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains '' instead of 'bar.example.com.'
Feb  8 13:56:40.498: INFO: File jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains '' instead of 'bar.example.com.'
Feb  8 13:56:40.498: INFO: Lookups using dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 failed for: [wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local]

Feb  8 13:56:45.513: INFO: File wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb  8 13:56:45.524: INFO: File jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb  8 13:56:45.524: INFO: Lookups using dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 failed for: [wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local]

Feb  8 13:56:50.548: INFO: File wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb  8 13:56:50.573: INFO: File jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb  8 13:56:50.574: INFO: Lookups using dns-4484/dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 failed for: [wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local]

Feb  8 13:56:55.523: INFO: DNS probes using dns-test-0320e38d-27c0-4bf1-992f-f2bfd72da7e0 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4484.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 13:57:09.809: INFO: File wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-57f872fb-7628-453a-b112-240a5556d8b1 contains '' instead of '10.96.238.51'
Feb  8 13:57:09.816: INFO: File jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local from pod  dns-4484/dns-test-57f872fb-7628-453a-b112-240a5556d8b1 contains '' instead of '10.96.238.51'
Feb  8 13:57:09.816: INFO: Lookups using dns-4484/dns-test-57f872fb-7628-453a-b112-240a5556d8b1 failed for: [wheezy_udp@dns-test-service-3.dns-4484.svc.cluster.local jessie_udp@dns-test-service-3.dns-4484.svc.cluster.local]

Feb  8 13:57:14.834: INFO: DNS probes using dns-test-57f872fb-7628-453a-b112-240a5556d8b1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:57:15.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4484" for this suite.
Feb  8 13:57:23.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:57:23.279: INFO: namespace dns-4484 deletion completed in 8.17180012s

• [SLOW TEST:74.832 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
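For context, the ExternalName service the dig probes above resolve would look roughly like the manifest below. The log confirms the service name, namespace, and the bar.example.com target; everything else in the spec is an assumption, not taken from the log:

```yaml
# Sketch of the test's ExternalName service (name, namespace, and
# externalName come from the log; the rest is assumed).
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4484
spec:
  type: ExternalName
  externalName: bar.example.com
```

When the test later changes the service to type=ClusterIP, the same DNS name stops returning a CNAME and instead resolves to an A record (10.96.238.51 in the probes above), which is why the probe commands switch from `dig ... CNAME` to `dig ... A`.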
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:57:23.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  8 13:57:23.425: INFO: Waiting up to 5m0s for pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597" in namespace "emptydir-4902" to be "success or failure"
Feb  8 13:57:23.431: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Pending", Reason="", readiness=false. Elapsed: 5.612884ms
Feb  8 13:57:25.442: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016757098s
Feb  8 13:57:27.452: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027212175s
Feb  8 13:57:29.460: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035260599s
Feb  8 13:57:31.482: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056679844s
Feb  8 13:57:33.497: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071678797s
STEP: Saw pod success
Feb  8 13:57:33.497: INFO: Pod "pod-7680d24e-8ca5-402f-badd-5e37df5fa597" satisfied condition "success or failure"
Feb  8 13:57:33.501: INFO: Trying to get logs from node iruya-node pod pod-7680d24e-8ca5-402f-badd-5e37df5fa597 container test-container: 
STEP: delete the pod
Feb  8 13:57:33.726: INFO: Waiting for pod pod-7680d24e-8ca5-402f-badd-5e37df5fa597 to disappear
Feb  8 13:57:33.763: INFO: Pod pod-7680d24e-8ca5-402f-badd-5e37df5fa597 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:57:33.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4902" for this suite.
Feb  8 13:57:39.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:57:39.949: INFO: namespace emptydir-4902 deletion completed in 6.178515688s

• [SLOW TEST:16.669 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
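The pod spec for the "emptydir 0777 on node default medium" test is not printed in the log. A minimal sketch, assuming a busybox image, a test command, and a mount path (none of which appear in the log; only the container name test-container does):

```yaml
# Hedged sketch of an emptyDir test pod; pod name, image, command, and
# mount path are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}      # "default medium" = node-local storage, not medium: Memory
```

The test's "success or failure" condition above corresponds to this pod running its command to completion and reaching Phase="Succeeded".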
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:57:39.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  8 13:57:40.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4607'
Feb  8 13:57:42.030: INFO: stderr: ""
Feb  8 13:57:42.030: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  8 13:57:43.039: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:43.039: INFO: Found 0 / 1
Feb  8 13:57:44.037: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:44.038: INFO: Found 0 / 1
Feb  8 13:57:45.039: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:45.039: INFO: Found 0 / 1
Feb  8 13:57:46.037: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:46.037: INFO: Found 0 / 1
Feb  8 13:57:47.041: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:47.041: INFO: Found 0 / 1
Feb  8 13:57:48.036: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:48.036: INFO: Found 0 / 1
Feb  8 13:57:49.038: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:49.038: INFO: Found 0 / 1
Feb  8 13:57:50.047: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:50.047: INFO: Found 1 / 1
Feb  8 13:57:50.047: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  8 13:57:50.059: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 13:57:50.059: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  8 13:57:50.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607'
Feb  8 13:57:50.241: INFO: stderr: ""
Feb  8 13:57:50.241: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Feb 13:57:48.650 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 13:57:48.650 # Server started, Redis version 3.2.12\n1:M 08 Feb 13:57:48.650 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 13:57:48.651 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  8 13:57:50.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607 --tail=1'
Feb  8 13:57:50.429: INFO: stderr: ""
Feb  8 13:57:50.429: INFO: stdout: "1:M 08 Feb 13:57:48.651 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  8 13:57:50.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607 --limit-bytes=1'
Feb  8 13:57:50.546: INFO: stderr: ""
Feb  8 13:57:50.547: INFO: stdout: " "
STEP: exposing timestamps
Feb  8 13:57:50.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607 --tail=1 --timestamps'
Feb  8 13:57:50.702: INFO: stderr: ""
Feb  8 13:57:50.702: INFO: stdout: "2020-02-08T13:57:48.652056916Z 1:M 08 Feb 13:57:48.651 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  8 13:57:53.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607 --since=1s'
Feb  8 13:57:53.385: INFO: stderr: ""
Feb  8 13:57:53.385: INFO: stdout: ""
Feb  8 13:57:53.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-srsgv redis-master --namespace=kubectl-4607 --since=24h'
Feb  8 13:57:53.520: INFO: stderr: ""
Feb  8 13:57:53.520: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 08 Feb 13:57:48.650 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Feb 13:57:48.650 # Server started, Redis version 3.2.12\n1:M 08 Feb 13:57:48.650 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Feb 13:57:48.651 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  8 13:57:53.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4607'
Feb  8 13:57:53.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 13:57:53.644: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  8 13:57:53.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4607'
Feb  8 13:57:53.784: INFO: stderr: "No resources found.\n"
Feb  8 13:57:53.784: INFO: stdout: ""
Feb  8 13:57:53.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4607 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 13:57:54.027: INFO: stderr: ""
Feb  8 13:57:54.027: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:57:54.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4607" for this suite.
Feb  8 13:58:16.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:58:16.201: INFO: namespace kubectl-4607 deletion completed in 22.160308351s

• [SLOW TEST:36.251 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
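The replicationcontroller piped to `kubectl create -f -` above is not shown in the log. A sketch consistent with what the log does show (the app=redis selector, a redis-master container, Redis 3.2.12 listening on port 6379); the image tag and exact field layout are assumptions:

```yaml
# Hypothetical sketch of the rc behind "replicationcontroller/redis-master
# created"; image and template details are assumed, not from the log.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: redis:3.2.12
        ports:
        - containerPort: 6379
```

The test then exercises the log-filtering flags visible above: `--tail=1` (last line only), `--limit-bytes=1` (first byte only), `--timestamps` (prefix each line with an RFC3339 timestamp), and `--since=1s` / `--since=24h` (time-range restriction, which is why the 1s window returns empty output).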
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:58:16.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  8 13:58:16.244: INFO: Waiting up to 5m0s for pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7" in namespace "emptydir-5009" to be "success or failure"
Feb  8 13:58:16.321: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7": Phase="Pending", Reason="", readiness=false. Elapsed: 76.585425ms
Feb  8 13:58:18.330: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0864035s
Feb  8 13:58:20.344: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099621169s
Feb  8 13:58:22.352: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108166719s
Feb  8 13:58:24.358: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113688617s
STEP: Saw pod success
Feb  8 13:58:24.358: INFO: Pod "pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7" satisfied condition "success or failure"
Feb  8 13:58:24.360: INFO: Trying to get logs from node iruya-node pod pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7 container test-container: 
STEP: delete the pod
Feb  8 13:58:24.438: INFO: Waiting for pod pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7 to disappear
Feb  8 13:58:24.454: INFO: Pod pod-e6b01800-e8e9-4a95-b9ef-6635f7c7eac7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:58:24.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5009" for this suite.
Feb  8 13:58:30.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:58:30.733: INFO: namespace emptydir-5009 deletion completed in 6.251577256s

• [SLOW TEST:14.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:58:30.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-22ee8b1b-cc01-4f1e-bf7b-f10b6ef4454f
STEP: Creating a pod to test consume secrets
Feb  8 13:58:30.917: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62" in namespace "projected-8048" to be "success or failure"
Feb  8 13:58:30.943: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Pending", Reason="", readiness=false. Elapsed: 25.7871ms
Feb  8 13:58:32.958: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040856579s
Feb  8 13:58:34.964: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047351819s
Feb  8 13:58:37.029: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111837455s
Feb  8 13:58:39.039: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122023866s
Feb  8 13:58:41.059: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142457078s
STEP: Saw pod success
Feb  8 13:58:41.059: INFO: Pod "pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62" satisfied condition "success or failure"
Feb  8 13:58:41.063: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 13:58:41.489: INFO: Waiting for pod pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62 to disappear
Feb  8 13:58:41.510: INFO: Pod pod-projected-secrets-1fabb9c8-d282-454a-97e1-bcc485eefb62 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:58:41.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8048" for this suite.
Feb  8 13:58:47.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:58:47.687: INFO: namespace projected-8048 deletion completed in 6.171414125s

• [SLOW TEST:16.955 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
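The "projection with secret ... with mappings" step above creates a secret and a pod that mounts it through a projected volume, remapping keys to file paths. A hedged sketch of the pod side; the secret name and container name come from the log, while the key names, paths, and image are assumptions:

```yaml
# Sketch of a pod consuming a secret via a projected volume with key
# mappings; key/path/image values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-22ee8b1b-cc01-4f1e-bf7b-f10b6ef4454f
          items:
          - key: data-1          # assumed key name
            path: new-path-data-1  # "mapping": key is exposed under this path
```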
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:58:47.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  8 13:58:47.819: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575133,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 13:58:47.820: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575133,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  8 13:58:57.849: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575148,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  8 13:58:57.850: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575148,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  8 13:59:07.871: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575162,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 13:59:07.871: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575162,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  8 13:59:17.885: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575177,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 13:59:17.885: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-a,UID:967e49ce-b9e1-404c-a3a8-d5c9b19d82a5,ResourceVersion:23575177,Generation:0,CreationTimestamp:2020-02-08 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  8 13:59:27.909: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-b,UID:e2aaeb38-0c13-4954-8935-8b1378237170,ResourceVersion:23575191,Generation:0,CreationTimestamp:2020-02-08 13:59:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 13:59:27.909: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-b,UID:e2aaeb38-0c13-4954-8935-8b1378237170,ResourceVersion:23575191,Generation:0,CreationTimestamp:2020-02-08 13:59:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  8 13:59:37.928: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-b,UID:e2aaeb38-0c13-4954-8935-8b1378237170,ResourceVersion:23575205,Generation:0,CreationTimestamp:2020-02-08 13:59:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 13:59:37.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7137,SelfLink:/api/v1/namespaces/watch-7137/configmaps/e2e-watch-test-configmap-b,UID:e2aaeb38-0c13-4954-8935-8b1378237170,ResourceVersion:23575205,Generation:0,CreationTimestamp:2020-02-08 13:59:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:59:47.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7137" for this suite.
Feb  8 13:59:53.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 13:59:54.060: INFO: namespace watch-7137 deletion completed in 6.120882693s

• [SLOW TEST:66.372 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
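The watch spec above creates, modifies, and deletes label-selected configmaps and asserts that only the matching watchers see each event. A minimal sketch of the "B" ConfigMap being watched, reconstructed from the ObjectMeta fields printed in the log (name, namespace, and label are taken directly from the events above):

```yaml
# ConfigMap observed by the "multiple-watchers-B" watchers in the log;
# the label is what the watch's label selector filters on.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  namespace: watch-7137
  labels:
    watch-this-configmap: multiple-watchers-B
```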
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 13:59:54.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  8 13:59:54.119: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix281414815/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 13:59:54.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5926" for this suite.
Feb  8 14:00:00.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:00:00.328: INFO: namespace kubectl-5926 deletion completed in 6.143352643s

• [SLOW TEST:6.268 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:00:00.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:00:00.558: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"872c0bc7-a871-4552-ad59-c6e68c1d216a", Controller:(*bool)(0xc002a1e8ba), BlockOwnerDeletion:(*bool)(0xc002a1e8bb)}}
Feb  8 14:00:00.591: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"10dc39de-1580-4b74-9263-e757a9377581", Controller:(*bool)(0xc00317a6f2), BlockOwnerDeletion:(*bool)(0xc00317a6f3)}}
Feb  8 14:00:00.620: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9795ba69-0d8d-4c35-a28a-0827c063862e", Controller:(*bool)(0xc00317a8c2), BlockOwnerDeletion:(*bool)(0xc00317a8c3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:00:05.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6111" for this suite.
Feb  8 14:00:11.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:00:11.901: INFO: namespace gc-6111 deletion completed in 6.20628605s

• [SLOW TEST:11.573 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
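The garbage-collector spec above builds a deliberate ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2, per the OwnerReferences dumps in the log) and verifies deletion is not blocked. A hedged sketch of one link in that cycle, shown for pod1 only; the name and UID come from the log, while the `controller`/`blockOwnerDeletion` values and the container spec are assumptions (the log only shows non-nil bool pointers):

```yaml
# pod1 carrying an ownerReference to pod3, one edge of the
# pod1 <- pod3 <- pod2 <- pod1 dependency circle the test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: 872c0bc7-a871-4552-ad59-c6e68c1d216a
    controller: true            # assumed; log shows only a pointer value
    blockOwnerDeletion: true    # assumed; log shows only a pointer value
spec:
  containers:
  - name: placeholder           # not from the log
    image: k8s.gcr.io/pause:3.1
```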
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:00:11.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:00:12.090: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 14:00:12.109: INFO: Number of nodes with available pods: 0
Feb  8 14:00:12.109: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:13.130: INFO: Number of nodes with available pods: 0
Feb  8 14:00:13.130: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:14.148: INFO: Number of nodes with available pods: 0
Feb  8 14:00:14.148: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:15.132: INFO: Number of nodes with available pods: 0
Feb  8 14:00:15.132: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:16.124: INFO: Number of nodes with available pods: 0
Feb  8 14:00:16.124: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:18.040: INFO: Number of nodes with available pods: 0
Feb  8 14:00:18.040: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:18.518: INFO: Number of nodes with available pods: 0
Feb  8 14:00:18.518: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:19.699: INFO: Number of nodes with available pods: 0
Feb  8 14:00:19.699: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:20.197: INFO: Number of nodes with available pods: 0
Feb  8 14:00:20.197: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:00:21.120: INFO: Number of nodes with available pods: 1
Feb  8 14:00:21.120: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:00:22.150: INFO: Number of nodes with available pods: 2
Feb  8 14:00:22.150: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  8 14:00:22.217: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:22.217: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:23.257: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:23.257: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:24.242: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:24.242: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:25.248: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:25.248: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:26.243: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:26.243: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:27.241: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:27.241: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:27.241: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:28.242: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:28.242: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:28.242: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:29.245: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:29.245: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:29.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:30.245: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:30.245: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:30.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:31.245: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:31.245: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:31.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:32.372: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:32.372: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:32.372: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:33.244: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:33.244: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:33.244: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:34.242: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:34.242: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:34.242: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:35.249: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:35.249: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:35.249: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:36.285: INFO: Wrong image for pod: daemon-set-fv7dr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:36.285: INFO: Pod daemon-set-fv7dr is not available
Feb  8 14:00:36.285: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:37.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:37.245: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:38.246: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:38.246: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:39.251: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:39.251: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:40.247: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:40.247: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:41.317: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:41.317: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:42.252: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:42.252: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:43.248: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:43.248: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:44.247: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:44.247: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:45.246: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:45.246: INFO: Pod daemon-set-w2slp is not available
Feb  8 14:00:46.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:47.297: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:48.248: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:49.246: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:50.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:50.245: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:51.244: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:51.244: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:52.242: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:52.242: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:53.244: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:53.244: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:54.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:54.245: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:55.245: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:55.245: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:56.247: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:56.247: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:57.246: INFO: Wrong image for pod: daemon-set-rgmq7. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  8 14:00:57.246: INFO: Pod daemon-set-rgmq7 is not available
Feb  8 14:00:58.243: INFO: Pod daemon-set-bz89d is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  8 14:00:58.264: INFO: Number of nodes with available pods: 1
Feb  8 14:00:58.264: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:00:59.543: INFO: Number of nodes with available pods: 1
Feb  8 14:00:59.543: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:00.280: INFO: Number of nodes with available pods: 1
Feb  8 14:01:00.280: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:01.282: INFO: Number of nodes with available pods: 1
Feb  8 14:01:01.282: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:02.549: INFO: Number of nodes with available pods: 1
Feb  8 14:01:02.549: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:03.313: INFO: Number of nodes with available pods: 1
Feb  8 14:01:03.313: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:04.282: INFO: Number of nodes with available pods: 1
Feb  8 14:01:04.282: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:05.281: INFO: Number of nodes with available pods: 1
Feb  8 14:01:05.281: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:01:06.306: INFO: Number of nodes with available pods: 2
Feb  8 14:01:06.306: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-139, will wait for the garbage collector to delete the pods
Feb  8 14:01:07.218: INFO: Deleting DaemonSet.extensions daemon-set took: 808.852117ms
Feb  8 14:01:07.518: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.493727ms
Feb  8 14:01:27.927: INFO: Number of nodes with available pods: 0
Feb  8 14:01:27.927: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 14:01:27.930: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-139/daemonsets","resourceVersion":"23575476"},"items":null}

Feb  8 14:01:27.932: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-139/pods","resourceVersion":"23575476"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:01:27.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-139" for this suite.
Feb  8 14:01:33.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:01:34.056: INFO: namespace daemonsets-139 deletion completed in 6.109738874s

• [SLOW TEST:82.153 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
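The DaemonSet spec above starts pods running `docker.io/library/nginx:1.14-alpine`, then updates the template image to `gcr.io/kubernetes-e2e-test-images/redis:1.0` and waits for the RollingUpdate strategy to replace every pod (the "Wrong image for pod" polling lines). A hedged reconstruction of such a DaemonSet; the name, namespace, and images are from the log, while the labels and container name are assumptions:

```yaml
# DaemonSet with a RollingUpdate strategy; updating spec.template's
# image triggers the node-by-node pod replacement seen in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-139
spec:
  selector:
    matchLabels:
      app: daemon-set           # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app               # assumed name
        image: docker.io/library/nginx:1.14-alpine
        # updated by the test to gcr.io/kubernetes-e2e-test-images/redis:1.0
```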
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:01:34.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5445
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  8 14:01:34.228: INFO: Found 0 stateful pods, waiting for 3
Feb  8 14:01:44.235: INFO: Found 2 stateful pods, waiting for 3
Feb  8 14:01:54.285: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:01:54.285: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:01:54.285: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 14:02:04.241: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:02:04.241: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:02:04.241: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  8 14:02:04.280: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  8 14:02:14.348: INFO: Updating stateful set ss2
Feb  8 14:02:14.353: INFO: Waiting for Pod statefulset-5445/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:02:24.370: INFO: Waiting for Pod statefulset-5445/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  8 14:02:34.653: INFO: Found 2 stateful pods, waiting for 3
Feb  8 14:02:44.672: INFO: Found 2 stateful pods, waiting for 3
Feb  8 14:02:54.661: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:02:54.661: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:02:54.661: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  8 14:02:54.694: INFO: Updating stateful set ss2
Feb  8 14:02:54.720: INFO: Waiting for Pod statefulset-5445/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:03:04.734: INFO: Waiting for Pod statefulset-5445/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:03:14.756: INFO: Updating stateful set ss2
Feb  8 14:03:14.840: INFO: Waiting for StatefulSet statefulset-5445/ss2 to complete update
Feb  8 14:03:14.840: INFO: Waiting for Pod statefulset-5445/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:03:24.871: INFO: Waiting for StatefulSet statefulset-5445/ss2 to complete update
Feb  8 14:03:24.871: INFO: Waiting for Pod statefulset-5445/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  8 14:03:34.871: INFO: Deleting all statefulset in ns statefulset-5445
Feb  8 14:03:34.881: INFO: Scaling statefulset ss2 to 0
Feb  8 14:04:04.924: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 14:04:04.929: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:04:04.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5445" for this suite.
Feb  8 14:04:13.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:04:13.194: INFO: namespace statefulset-5445 deletion completed in 8.187431212s

• [SLOW TEST:159.138 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
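The canary/phased behavior above is driven by the StatefulSet `rollingUpdate.partition` field: with `partition: 2` on a 3-replica set, only `ss2-2` moves to the new revision (the canary), and lowering the partition phases the rest of the rollout. A hedged sketch of the updated spec; the set name, namespace, service name, replica count, and the 1.15-alpine image are from the log, the labels are assumptions:

```yaml
# Partitioned rolling update: pods with ordinal >= partition are
# updated to the new revision; lowering partition continues the rollout.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-5445
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2                  # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2              # only ss2-2 gets the new template
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```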
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:04:13.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  8 14:04:13.278: INFO: Waiting up to 5m0s for pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b" in namespace "emptydir-3812" to be "success or failure"
Feb  8 14:04:13.353: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 74.272446ms
Feb  8 14:04:15.361: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082170637s
Feb  8 14:04:17.369: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090645176s
Feb  8 14:04:19.379: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100519742s
Feb  8 14:04:21.390: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111283089s
STEP: Saw pod success
Feb  8 14:04:21.390: INFO: Pod "pod-9d905628-5f15-4de2-8667-c92a67d4d23b" satisfied condition "success or failure"
Feb  8 14:04:21.546: INFO: Trying to get logs from node iruya-node pod pod-9d905628-5f15-4de2-8667-c92a67d4d23b container test-container: 
STEP: delete the pod
Feb  8 14:04:21.626: INFO: Waiting for pod pod-9d905628-5f15-4de2-8667-c92a67d4d23b to disappear
Feb  8 14:04:21.641: INFO: Pod pod-9d905628-5f15-4de2-8667-c92a67d4d23b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:04:21.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3812" for this suite.
Feb  8 14:04:27.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:04:27.932: INFO: namespace emptydir-3812 deletion completed in 6.216669012s

• [SLOW TEST:14.738 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
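The emptyDir spec above launches a pod that exercises a 0666-mode file on an emptyDir volume with the default medium as a non-root user, then checks for the "success or failure" terminal phase. A rough sketch of that shape of pod; only the volume type, mode, and non-root/default-medium aspects come from the test name in the log, and everything else (names, image, command, UID) is an assumption:

```yaml
# Non-root pod writing a 0666 file onto an emptyDir (default medium),
# then exiting so its phase can be checked for Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo      # assumed name
spec:
  securityContext:
    runAsUser: 1001             # assumed non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox              # assumed image
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && stat -c %a /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                # default medium
```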
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:04:27.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-414af36b-e010-450b-809d-694ad3418662
STEP: Creating configMap with name cm-test-opt-upd-1b989e8f-34a2-4381-87b1-01e2341d8a6c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-414af36b-e010-450b-809d-694ad3418662
STEP: Updating configmap cm-test-opt-upd-1b989e8f-34a2-4381-87b1-01e2341d8a6c
STEP: Creating configMap with name cm-test-opt-create-ff57ca4d-f472-4b04-b296-4aa41c5073d6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:06:00.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3881" for this suite.
Feb  8 14:06:24.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:06:24.206: INFO: namespace configmap-3881 deletion completed in 24.15195034s

• [SLOW TEST:116.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
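Editor's note: the ConfigMap test above deletes an *optional* configMap (cm-test-opt-del-*) while the pod keeps running, updates another, and creates a third, then waits for the volume to reflect all three changes. A minimal Python sketch of that behaviour (hypothetical helper, not the kubelet's implementation): a missing optional configMap projects an empty volume instead of failing, and updates appear on the next sync.

```python
# Hypothetical sketch of optional-configMap volume resolution. A missing
# configMap is an error unless the volume source is marked optional.

def resolve_configmap_volume(configmaps, name, optional=False):
    """Return the file map a volume would project for configMap `name`.

    configmaps: dict mapping configMap name -> dict of key/value data.
    A missing non-optional configMap raises; a missing optional one
    projects no files, so the pod keeps running after the delete.
    """
    data = configmaps.get(name)
    if data is None:
        if optional:
            return {}          # volume stays mounted, just empty
        raise KeyError(f"configMap {name!r} not found")
    return dict(data)

store = {"cm-upd": {"data-1": "value-1"}}
assert resolve_configmap_volume(store, "cm-upd", optional=True) == {"data-1": "value-1"}
assert resolve_configmap_volume(store, "cm-del", optional=True) == {}  # deleted optional CM
store["cm-upd"]["data-3"] = "value-3"   # update is visible on the next sync
assert resolve_configmap_volume(store, "cm-upd", optional=True)["data-3"] == "value-3"
```

The "waiting to observe update in volume" step corresponds to the kubelet's periodic resync picking up the new store contents.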
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:06:24.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:06:24.317: INFO: Creating ReplicaSet my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a
Feb  8 14:06:24.343: INFO: Pod name my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a: Found 0 pods out of 1
Feb  8 14:06:29.352: INFO: Pod name my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a: Found 1 pods out of 1
Feb  8 14:06:29.352: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a" is running
Feb  8 14:06:31.369: INFO: Pod "my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a-c64rm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 14:06:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 14:06:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 14:06:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 14:06:24 +0000 UTC Reason: Message:}])
Feb  8 14:06:31.369: INFO: Trying to dial the pod
Feb  8 14:06:36.400: INFO: Controller my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a: Got expected result from replica 1 [my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a-c64rm]: "my-hostname-basic-f84ca898-e206-49b1-b3b9-82349f9a4e4a-c64rm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:06:36.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8741" for this suite.
Feb  8 14:06:42.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:06:42.600: INFO: namespace replicaset-8741 deletion completed in 6.194802245s

• [SLOW TEST:18.393 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:06:42.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:07:12.799: INFO: Container started at 2020-02-08 14:06:51 +0000 UTC, pod became ready at 2020-02-08 14:07:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:07:12.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5645" for this suite.
Feb  8 14:07:34.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:07:34.977: INFO: namespace container-probe-5645 deletion completed in 22.172085584s

• [SLOW TEST:52.377 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
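Editor's note: the readiness-probe test asserts the pod is never Ready before the probe's initial delay (container started 14:06:51, became ready 14:07:11). A small sketch of the earliest-ready bound, with assumed probe parameters (the log does not show the pod spec):

```python
# Sketch of when a readiness probe can first mark a container Ready:
# never before start + initialDelaySeconds, and only after
# success_threshold consecutive successful probes (parameters assumed).

def earliest_ready(start, initial_delay, period, success_threshold=1):
    """Earliest time a container can become Ready, same units as `start`."""
    first_probe = start + initial_delay
    return first_probe + (success_threshold - 1) * period

# ~20s from start to ready, consistent with an initialDelaySeconds near 20.
assert earliest_ready(start=0, initial_delay=20, period=5) == 20
assert earliest_ready(start=0, initial_delay=20, period=5, success_threshold=3) == 30
```

A readiness probe failing later would only remove the pod from endpoints; unlike a liveness probe it never restarts the container, which is the second half of the test's assertion.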
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:07:34.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-484fef20-f6c7-4e71-8932-e1df483fab08 in namespace container-probe-7156
Feb  8 14:07:43.142: INFO: Started pod liveness-484fef20-f6c7-4e71-8932-e1df483fab08 in namespace container-probe-7156
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 14:07:43.146: INFO: Initial restart count of pod liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is 0
Feb  8 14:08:05.261: INFO: Restart count of pod container-probe-7156/liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is now 1 (22.114870709s elapsed)
Feb  8 14:08:25.373: INFO: Restart count of pod container-probe-7156/liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is now 2 (42.226226782s elapsed)
Feb  8 14:08:45.477: INFO: Restart count of pod container-probe-7156/liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is now 3 (1m2.330316309s elapsed)
Feb  8 14:09:05.622: INFO: Restart count of pod container-probe-7156/liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is now 4 (1m22.475719464s elapsed)
Feb  8 14:10:18.609: INFO: Restart count of pod container-probe-7156/liveness-484fef20-f6c7-4e71-8932-e1df483fab08 is now 5 (2m35.462305745s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:10:18.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7156" for this suite.
Feb  8 14:10:24.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:10:24.836: INFO: namespace container-probe-7156 deletion completed in 6.165122414s

• [SLOW TEST:169.858 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
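Editor's note: the restart timestamps logged above encode the kubelet's crash-loop back-off. The first four restarts arrive about 20s apart, then the fifth takes ~73s as the back-off delay grows. A behavioural illustration (not the kubelet's code) using the elapsed values rounded from the log:

```python
# Intervals between the restart observations above. The jump from ~20s to
# ~73s reflects exponential crash-loop back-off between container restarts.

elapsed = [22.1, 42.2, 62.3, 82.5, 155.5]   # seconds, rounded from the log
intervals = [round(b - a, 1) for a, b in zip([0.0] + elapsed, elapsed)]
assert intervals == [22.1, 20.1, 20.1, 20.2, 73.0]

# The test's core property: restartCount only ever goes up.
counts = [0, 1, 2, 3, 4, 5]
assert all(b >= a for a, b in zip(counts, counts[1:]))
```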
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:10:24.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  8 14:10:35.145: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  8 14:10:50.277: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:10:50.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2650" for this suite.
Feb  8 14:10:56.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:10:56.458: INFO: namespace pods-2650 deletion completed in 6.168727189s

• [SLOW TEST:31.622 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
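Editor's note: the grace-period test deletes the pod, then polls until it no longer exists ("no pod exists with the name we were looking for..."). The e2e framework's wait loops follow the poll-with-timeout shape below; this is a hypothetical helper in that spirit, not the framework's actual API.

```python
import time

# Poll `exists()` every `interval` seconds until it returns False or the
# timeout elapses. Clock and sleep are injectable so the sketch is testable.

def wait_for_disappearance(exists, timeout=60.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while clock() < deadline:
        if not exists():
            return True     # termination observed and completed
        sleep(interval)
    return False            # still present after timeout: a test failure

# Simulate a pod that disappears after three "still exists" checks.
state = {"checks": 0}
def fake_exists():
    state["checks"] += 1
    return state["checks"] < 4

assert wait_for_disappearance(fake_exists, timeout=5.0, interval=0.0, sleep=lambda s: None)
assert state["checks"] == 4
```

The 2-second default interval matches the cadence of the "Waiting for pod ... to disappear" lines seen later in this log.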
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:10:56.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5263216f-9770-4df9-be17-b093e5a7e6e0
STEP: Creating configMap with name cm-test-opt-upd-4f47a166-2af4-41f8-a605-ec6399cdd18b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5263216f-9770-4df9-be17-b093e5a7e6e0
STEP: Updating configmap cm-test-opt-upd-4f47a166-2af4-41f8-a605-ec6399cdd18b
STEP: Creating configMap with name cm-test-opt-create-a6ead932-3ee4-448f-9754-4e4db5773d86
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:11:14.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4493" for this suite.
Feb  8 14:11:37.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:11:37.141: INFO: namespace projected-4493 deletion completed in 22.138163773s

• [SLOW TEST:40.682 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:11:37.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  8 14:11:47.854: INFO: Successfully updated pod "annotationupdate288a284b-0aae-468f-93e8-05214a4e8441"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:11:49.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6952" for this suite.
Feb  8 14:12:11.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:12:12.097: INFO: namespace downward-api-6952 deletion completed in 22.157140006s

• [SLOW TEST:34.956 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
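Editor's note: the Downward API test patches the pod's annotations and waits for the projected volume file to change. A hedged sketch of the serialization (one `key="value"` line per annotation, a format assumed from common downward API examples; real kubelet output Go-quotes the values):

```python
# Hypothetical serializer for a downward API volume file built from
# metadata.annotations: one key="value" line per annotation, sorted.

def project_annotations(annotations):
    return "".join(f'{k}="{v}"\n' for k, v in sorted(annotations.items()))

before = project_annotations({"builder": "alice"})
after = project_annotations({"builder": "alice", "rebuilt": "true"})
assert before == 'builder="alice"\n'
assert 'rebuilt="true"' in after   # the annotation update shows up on resync
```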
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:12:12.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  8 14:15:11.466: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:11.569: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:13.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:13.582: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:15.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:15.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:17.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:17.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:19.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:19.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:21.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:21.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:23.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:23.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:25.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:25.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:27.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:27.580: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:29.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:29.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:31.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:31.585: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:33.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:33.580: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:35.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:35.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:37.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:37.587: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:39.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:39.645: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:41.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:41.584: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:43.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:43.582: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:45.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:45.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:47.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:47.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:49.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:49.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:51.570: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:51.591: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:53.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:53.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:55.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:55.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:57.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:57.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:15:59.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:15:59.576: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:01.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:01.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:03.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:03.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:05.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:05.654: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:07.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:07.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:09.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:09.580: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:11.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:11.583: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:13.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:13.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:15.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:15.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:17.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:17.576: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:19.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:19.577: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:21.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:21.594: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:23.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:23.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:25.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:25.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:27.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:27.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:29.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:29.582: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:31.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:31.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:33.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:33.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:35.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:35.581: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:37.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:37.575: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:39.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:39.578: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:41.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:41.579: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:43.570: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:43.588: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:45.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:45.580: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  8 14:16:47.569: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  8 14:16:47.586: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:16:47.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3390" for this suite.
Feb  8 14:17:11.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:17:11.766: INFO: namespace container-lifecycle-hook-3390 deletion completed in 24.169848592s

• [SLOW TEST:299.668 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:17:11.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0208 14:17:42.463211       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 14:17:42.463: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:17:42.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4378" for this suite.
Feb  8 14:17:48.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:17:48.616: INFO: namespace gc-4378 deletion completed in 6.146817966s

• [SLOW TEST:36.850 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
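Editor's note: the garbage-collector test deletes a Deployment with `deleteOptions.propagationPolicy: Orphan` and verifies the ReplicaSet survives. A toy model of propagation (not client-go): "Orphan" strips the ownerReference and keeps the dependent, while "Background"/"Foreground" cascade the delete.

```python
# Toy deletion-propagation model. objects: dict name -> {"owner": name-or-None}.

def delete(objects, name, propagation="Background"):
    objects.pop(name, None)
    dependents = [n for n, o in objects.items() if o.get("owner") == name]
    if propagation == "Orphan":
        for dep in dependents:
            objects[dep]["owner"] = None        # strip ownerReference, keep object
    else:
        for dep in dependents:
            delete(objects, dep, propagation)   # cascade the delete

cluster = {"deploy": {"owner": None}, "rs": {"owner": "deploy"}}
delete(cluster, "deploy", propagation="Orphan")
assert "rs" in cluster and cluster["rs"]["owner"] is None   # RS survives, orphaned

cluster = {"deploy": {"owner": None}, "rs": {"owner": "deploy"}}
delete(cluster, "deploy", propagation="Background")
assert "rs" not in cluster                                   # cascaded delete
```

The test's 30-second wait is there to catch a GC that *mistakenly* cascades despite the Orphan policy.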
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:17:48.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  8 14:17:49.747: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  8 14:17:50.454: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  8 14:17:52.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768271, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:17:54.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768271, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:17:56.695: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768271, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:17:58.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768271, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:18:00.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768271, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768270, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:18:06.957: INFO: Waited 4.250242506s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:18:07.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-440" for this suite.
Feb  8 14:18:13.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:18:13.792: INFO: namespace aggregator-440 deletion completed in 6.130874666s

• [SLOW TEST:25.175 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
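The repeated `deployment status:` dumps above are the e2e framework polling the Deployment until it reports available, logging each observation. This is not the actual framework code, just a minimal Python sketch of that poll-with-timeout pattern (all names illustrative):

```python
import time

def wait_for(condition, timeout=10.0, interval=2.0, log=print):
    """Poll `condition` every `interval` seconds until it reports
    available or `timeout` elapses, logging each intermediate status
    the way the log above dumps the DeploymentStatus on every poll."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        status = condition()
        if status["available"]:
            return time.monotonic() - start  # seconds waited, like "Waited 4.25s"
        log(f"deployment status: {status}")
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulated status source: the deployment becomes available on the 4th poll.
polls = iter([False, False, False, True])
elapsed = wait_for(lambda: {"available": next(polls)}, timeout=30, interval=0.01)
```

The real framework additionally inspects the `Available` and `Progressing` conditions rather than a single boolean, but the control flow is the same.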
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:18:13.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  8 14:18:14.011: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577556,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 14:18:14.012: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577557,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  8 14:18:14.012: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577558,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  8 14:18:24.156: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577574,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 14:18:24.156: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577575,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  8 14:18:24.156: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-851,SelfLink:/api/v1/namespaces/watch-851/configmaps/e2e-watch-test-label-changed,UID:45f78ffe-d087-4be8-8e97-11d5ba396d38,ResourceVersion:23577576,Generation:0,CreationTimestamp:2020-02-08 14:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:18:24.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-851" for this suite.
Feb  8 14:18:30.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:18:30.292: INFO: namespace watch-851 deletion completed in 6.129974481s

• [SLOW TEST:16.499 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
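The Watchers spec above relies on how label selectors interact with watch events: to a watcher filtering on a label, an object that stops matching appears as DELETED, one that starts matching again appears as ADDED, and the final real deletion is another DELETED. A small Python sketch of that translation (a simplified model, not the apiserver implementation):

```python
def selector_events(updates, selector):
    """Translate a sequence of raw object states into the watch events
    seen by a client filtering on `selector` (required label key/values)."""
    events = []
    was_matching = False
    for obj in updates:
        matches = all(obj["labels"].get(k) == v for k, v in selector.items())
        if matches and not was_matching:
            events.append(("ADDED", obj))      # (re)entered the selector
        elif matches and was_matching:
            events.append(("MODIFIED", obj))   # changed while matching
        elif was_matching and not matches:
            events.append(("DELETED", obj))    # left the selector (or was deleted)
        was_matching = matches
    return events

# Mirror the test: create, mutate, change the label away, mutate, restore, mutate, delete.
label = {"watch-this-configmap": "label-changed-and-restored"}
updates = [
    {"labels": dict(label), "mutation": 0},
    {"labels": dict(label), "mutation": 1},
    {"labels": {"watch-this-configmap": "other"}, "mutation": 1},  # DELETED to the watcher
    {"labels": {"watch-this-configmap": "other"}, "mutation": 2},  # invisible to the watcher
    {"labels": dict(label), "mutation": 2},                        # ADDED again
    {"labels": dict(label), "mutation": 3},
    {"labels": {}, "mutation": 3},                                 # actual deletion
]
kinds = [k for k, _ in selector_events(updates, label)]
```

This reproduces the event sequence in the log: ADDED, MODIFIED, DELETED, then ADDED, MODIFIED, DELETED after the label is restored and the object deleted.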
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:18:30.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f2238c63-95dc-4fb5-a017-3d1909546441
STEP: Creating a pod to test consume secrets
Feb  8 14:18:30.442: INFO: Waiting up to 5m0s for pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42" in namespace "secrets-3068" to be "success or failure"
Feb  8 14:18:30.462: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42": Phase="Pending", Reason="", readiness=false. Elapsed: 19.730293ms
Feb  8 14:18:32.473: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03083692s
Feb  8 14:18:34.483: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040954809s
Feb  8 14:18:36.495: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052449579s
Feb  8 14:18:38.510: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067894003s
STEP: Saw pod success
Feb  8 14:18:38.510: INFO: Pod "pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42" satisfied condition "success or failure"
Feb  8 14:18:38.523: INFO: Trying to get logs from node iruya-node pod pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42 container secret-volume-test: 
STEP: delete the pod
Feb  8 14:18:38.679: INFO: Waiting for pod pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42 to disappear
Feb  8 14:18:38.687: INFO: Pod pod-secrets-504ea25b-97ae-4b36-8cde-0c676ce8ad42 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:18:38.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3068" for this suite.
Feb  8 14:18:44.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:18:44.953: INFO: namespace secrets-3068 deletion completed in 6.192429266s

• [SLOW TEST:14.661 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
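The `Waiting up to 5m0s for pod ... to be "success or failure"` lines, repeated in most volume specs above, wait for the test pod to reach a terminal phase (`Succeeded` or `Failed`) rather than merely `Running`. A minimal sketch of that terminal-phase wait, under the assumption that each poll yields the pod's current phase (illustrative names, not the e2e framework API):

```python
TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_completion(observed_phases, max_polls):
    """Step through observed pod phases (Pending -> Running -> terminal),
    returning the terminal phase or raising if the pod never completes
    within `max_polls` observations."""
    for i, phase in enumerate(observed_phases):
        if i >= max_polls:
            break
        if phase in TERMINAL_PHASES:
            return phase
    raise TimeoutError("pod did not reach a terminal phase")

# The log above shows several Pending polls before Succeeded.
result = wait_for_completion(["Pending", "Pending", "Running", "Succeeded"], max_polls=150)
```

The spec then asserts the terminal phase was `Succeeded` ("Saw pod success") before fetching container logs and deleting the pod.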
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:18:44.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:18:45.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9" in namespace "downward-api-3697" to be "success or failure"
Feb  8 14:18:45.080: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.541907ms
Feb  8 14:18:47.090: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020810243s
Feb  8 14:18:49.156: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086736705s
Feb  8 14:18:51.184: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114245965s
Feb  8 14:18:54.001: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Running", Reason="", readiness=true. Elapsed: 8.930996469s
Feb  8 14:18:56.008: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.938756133s
STEP: Saw pod success
Feb  8 14:18:56.008: INFO: Pod "downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9" satisfied condition "success or failure"
Feb  8 14:18:56.013: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9 container client-container: 
STEP: delete the pod
Feb  8 14:18:56.066: INFO: Waiting for pod downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9 to disappear
Feb  8 14:18:56.069: INFO: Pod downwardapi-volume-df2e01e8-3890-4b0a-87d9-4dfeb1240ae9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:18:56.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3697" for this suite.
Feb  8 14:19:02.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:19:02.236: INFO: namespace downward-api-3697 deletion completed in 6.162677865s

• [SLOW TEST:17.283 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:19:02.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-91299210-f061-445a-be27-7aa3773cb993
STEP: Creating a pod to test consume configMaps
Feb  8 14:19:02.348: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36" in namespace "projected-6467" to be "success or failure"
Feb  8 14:19:02.373: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Pending", Reason="", readiness=false. Elapsed: 25.5104ms
Feb  8 14:19:04.380: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031959965s
Feb  8 14:19:06.388: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039939974s
Feb  8 14:19:08.395: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047519561s
Feb  8 14:19:10.403: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05549221s
Feb  8 14:19:13.163: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.815495171s
STEP: Saw pod success
Feb  8 14:19:13.163: INFO: Pod "pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36" satisfied condition "success or failure"
Feb  8 14:19:13.188: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 14:19:13.598: INFO: Waiting for pod pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36 to disappear
Feb  8 14:19:13.604: INFO: Pod pod-projected-configmaps-af9fde3b-4f3c-478d-873a-1b094c44be36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:19:13.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6467" for this suite.
Feb  8 14:19:19.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:19:19.828: INFO: namespace projected-6467 deletion completed in 6.218057196s

• [SLOW TEST:17.592 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:19:19.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  8 14:19:20.000: INFO: Number of nodes with available pods: 0
Feb  8 14:19:20.000: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:21.013: INFO: Number of nodes with available pods: 0
Feb  8 14:19:21.013: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:23.042: INFO: Number of nodes with available pods: 0
Feb  8 14:19:23.042: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:24.072: INFO: Number of nodes with available pods: 0
Feb  8 14:19:24.072: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:25.009: INFO: Number of nodes with available pods: 0
Feb  8 14:19:25.009: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:27.165: INFO: Number of nodes with available pods: 0
Feb  8 14:19:27.165: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:28.012: INFO: Number of nodes with available pods: 0
Feb  8 14:19:28.012: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:29.011: INFO: Number of nodes with available pods: 1
Feb  8 14:19:29.011: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  8 14:19:30.028: INFO: Number of nodes with available pods: 2
Feb  8 14:19:30.028: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  8 14:19:30.087: INFO: Number of nodes with available pods: 1
Feb  8 14:19:30.087: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:31.115: INFO: Number of nodes with available pods: 1
Feb  8 14:19:31.115: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:32.110: INFO: Number of nodes with available pods: 1
Feb  8 14:19:32.110: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:33.102: INFO: Number of nodes with available pods: 1
Feb  8 14:19:33.102: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:34.105: INFO: Number of nodes with available pods: 1
Feb  8 14:19:34.105: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:35.103: INFO: Number of nodes with available pods: 1
Feb  8 14:19:35.103: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:36.104: INFO: Number of nodes with available pods: 1
Feb  8 14:19:36.104: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:37.139: INFO: Number of nodes with available pods: 1
Feb  8 14:19:37.139: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:38.117: INFO: Number of nodes with available pods: 1
Feb  8 14:19:38.117: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:39.099: INFO: Number of nodes with available pods: 1
Feb  8 14:19:39.099: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:19:40.100: INFO: Number of nodes with available pods: 2
Feb  8 14:19:40.100: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8169, will wait for the garbage collector to delete the pods
Feb  8 14:19:40.174: INFO: Deleting DaemonSet.extensions daemon-set took: 16.640918ms
Feb  8 14:19:40.474: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.352384ms
Feb  8 14:19:57.883: INFO: Number of nodes with available pods: 0
Feb  8 14:19:57.883: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 14:19:57.887: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8169/daemonsets","resourceVersion":"23577837"},"items":null}

Feb  8 14:19:57.890: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8169/pods","resourceVersion":"23577837"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:19:57.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8169" for this suite.
Feb  8 14:20:03.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:20:04.034: INFO: namespace daemonsets-8169 deletion completed in 6.123697568s

• [SLOW TEST:44.206 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
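The DaemonSet spec above forces a daemon pod's phase to `Failed` and checks that the controller "revives" it. The revival falls out of ordinary reconciliation: failed daemon pods are deleted, and any node left without a daemon pod gets a replacement. A toy Python sketch of one reconciliation pass (a simplified model of the controller, not its real code):

```python
def reconcile(nodes, pods):
    """One DaemonSet reconciliation pass: drop Failed daemon pods, then
    create a Pending replacement on every node that lacks one, so a pod
    forced to Failed is recreated on the next pass."""
    pods = [p for p in pods if p["phase"] != "Failed"]
    covered = {p["node"] for p in pods}
    for node in nodes:
        if node not in covered:
            pods.append({"node": node, "phase": "Pending"})
    return pods

# Mirror the test: two nodes, the pod on iruya-node has been forced to Failed.
nodes = ["iruya-node", "iruya-server-sfge57q7djm7"]
pods = [
    {"node": "iruya-node", "phase": "Failed"},
    {"node": "iruya-server-sfge57q7djm7", "phase": "Running"},
]
pods = reconcile(nodes, pods)
```

After the pass there is again one pod per node and no Failed pods, matching the log's return to "Number of running nodes: 2, number of available pods: 2" once the replacement becomes ready.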
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:20:04.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  8 14:20:04.270: INFO: Waiting up to 5m0s for pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed" in namespace "emptydir-6950" to be "success or failure"
Feb  8 14:20:04.278: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.27057ms
Feb  8 14:20:06.287: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016553464s
Feb  8 14:20:08.377: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106547372s
Feb  8 14:20:10.389: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117961905s
Feb  8 14:20:12.405: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134382153s
STEP: Saw pod success
Feb  8 14:20:12.405: INFO: Pod "pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed" satisfied condition "success or failure"
Feb  8 14:20:12.409: INFO: Trying to get logs from node iruya-node pod pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed container test-container: 
STEP: delete the pod
Feb  8 14:20:12.515: INFO: Waiting for pod pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed to disappear
Feb  8 14:20:12.559: INFO: Pod pod-b8ab80eb-2613-49a5-9ef4-a0ff64f372ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:20:12.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6950" for this suite.
Feb  8 14:20:18.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:20:18.750: INFO: namespace emptydir-6950 deletion completed in 6.142165653s

• [SLOW TEST:14.716 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:20:18.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d0d36aaa-84bf-487b-b2b0-a3c9302665f4
STEP: Creating a pod to test consume configMaps
Feb  8 14:20:18.876: INFO: Waiting up to 5m0s for pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6" in namespace "configmap-7179" to be "success or failure"
Feb  8 14:20:18.894: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 18.155358ms
Feb  8 14:20:20.902: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026547634s
Feb  8 14:20:22.909: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03372365s
Feb  8 14:20:24.914: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038716578s
Feb  8 14:20:26.922: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04582085s
STEP: Saw pod success
Feb  8 14:20:26.922: INFO: Pod "pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6" satisfied condition "success or failure"
Feb  8 14:20:26.924: INFO: Trying to get logs from node iruya-node pod pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6 container configmap-volume-test: 
STEP: delete the pod
Feb  8 14:20:27.188: INFO: Waiting for pod pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6 to disappear
Feb  8 14:20:27.253: INFO: Pod pod-configmaps-70cfcb9a-cb61-4621-b3e0-11a664eda7a6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:20:27.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7179" for this suite.
Feb  8 14:20:33.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:20:33.434: INFO: namespace configmap-7179 deletion completed in 6.168471628s

• [SLOW TEST:14.683 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
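The ConfigMap test that just completed mounts a ConfigMap into a pod "with mappings", i.e. with an explicit `items` list that projects a key to a custom file path, then checks the container can read the projected file. A minimal manifest sketch of that pattern (object names and the key/path values here are illustrative, not taken from the test source):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap        # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # hypothetical; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test  # container name matches the log line above
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-configmap
      items:                     # the "mappings": project key data-1 to a custom path
      - key: data-1
        path: path/to/data-1
```

The pod runs to completion ("Succeeded" in the poll loop above) because the container just cats the file and exits; the framework then fetches its logs to verify the projected content.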
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:20:33.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:20:33.548: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  8 14:20:38.565: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  8 14:20:42.591: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  8 14:20:44.599: INFO: Creating deployment "test-rollover-deployment"
Feb  8 14:20:44.623: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  8 14:20:47.452: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  8 14:20:47.463: INFO: Ensure that both replica sets have 1 created replica
Feb  8 14:20:47.485: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  8 14:20:47.493: INFO: Updating deployment test-rollover-deployment
Feb  8 14:20:47.493: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  8 14:20:49.508: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  8 14:20:49.520: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  8 14:20:49.530: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:49.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768448, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:20:51.551: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:51.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768448, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:20:53.541: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:53.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768448, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:20:55.540: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:55.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768448, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:20:57.543: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:57.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768448, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:20:59.546: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:20:59.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:21:01.539: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:21:01.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:21:03.544: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:21:03.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:21:05.544: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:21:05.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:21:07.544: INFO: all replica sets need to contain the pod-template-hash label
Feb  8 14:21:07.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768458, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768444, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:21:09.542: INFO: 
Feb  8 14:21:09.542: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  8 14:21:09.554: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3527,SelfLink:/apis/apps/v1/namespaces/deployment-3527/deployments/test-rollover-deployment,UID:ec73051f-78f4-4cb4-900b-9b01e9660d52,ResourceVersion:23578075,Generation:2,CreationTimestamp:2020-02-08 14:20:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-08 14:20:44 +0000 UTC 2020-02-08 14:20:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-08 14:21:08 +0000 UTC 2020-02-08 14:20:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  8 14:21:09.559: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3527,SelfLink:/apis/apps/v1/namespaces/deployment-3527/replicasets/test-rollover-deployment-854595fc44,UID:00a7197e-1299-4abd-9d67-2781a4a5baf7,ResourceVersion:23578064,Generation:2,CreationTimestamp:2020-02-08 14:20:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec73051f-78f4-4cb4-900b-9b01e9660d52 0xc002a09f77 0xc002a09f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  8 14:21:09.559: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  8 14:21:09.559: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3527,SelfLink:/apis/apps/v1/namespaces/deployment-3527/replicasets/test-rollover-controller,UID:194c3b4f-4852-4acc-a241-dae87007041c,ResourceVersion:23578074,Generation:2,CreationTimestamp:2020-02-08 14:20:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec73051f-78f4-4cb4-900b-9b01e9660d52 0xc002a09ea7 0xc002a09ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 14:21:09.560: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3527,SelfLink:/apis/apps/v1/namespaces/deployment-3527/replicasets/test-rollover-deployment-9b8b997cf,UID:fd72b5a4-eebe-49eb-a26e-a5ef15a05425,ResourceVersion:23578026,Generation:2,CreationTimestamp:2020-02-08 14:20:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ec73051f-78f4-4cb4-900b-9b01e9660d52 0xc0033b0050 0xc0033b0051}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 14:21:09.565: INFO: Pod "test-rollover-deployment-854595fc44-5cmng" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5cmng,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3527,SelfLink:/api/v1/namespaces/deployment-3527/pods/test-rollover-deployment-854595fc44-5cmng,UID:e19ef834-1211-4b22-a9f5-215a81ba7bf2,ResourceVersion:23578049,Generation:0,CreationTimestamp:2020-02-08 14:20:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 00a7197e-1299-4abd-9d67-2781a4a5baf7 0xc0033b0c57 0xc0033b0c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6cq9h {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6cq9h,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6cq9h true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b0cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b0cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:20:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:20:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:20:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:20:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-08 14:20:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-08 14:20:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f846cd6730c904a79fe15b8d6407703d2ade179588b0539d91bfd7eba685b094}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:21:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3527" for this suite.
Feb  8 14:21:15.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:21:15.726: INFO: namespace deployment-3527 deletion completed in 6.154286535s

• [SLOW TEST:42.292 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
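The rollover behavior exercised above hinges on the Deployment's RollingUpdate strategy: with `maxUnavailable: 0` and `maxSurge: 1` (the values visible in the dumped spec), the controller surges one new pod, waits `minReadySeconds` before counting it as available, and only then scales an old ReplicaSet down, so a template change made mid-rollout "rolls over" to a third ReplicaSet without ever dropping below one available replica. A sketch of the equivalent manifest, reconstructed from the spec dump above (only the fields relevant to the rollover are shown):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10            # new pod must stay ready this long before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never remove the old pod before the new one is available
      maxSurge: 1                # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

This is also why the poll loop above reports `UnavailableReplicas:1` for roughly 20 seconds after `ReadyReplicas` reaches 2: the new pod is ready but not yet past `minReadySeconds`, so the old ReplicaSet cannot be scaled to zero until then.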
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:21:15.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:21:15.840: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c" in namespace "downward-api-6319" to be "success or failure"
Feb  8 14:21:15.885: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.579094ms
Feb  8 14:21:17.893: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053664363s
Feb  8 14:21:19.899: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059570633s
Feb  8 14:21:21.906: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066146746s
Feb  8 14:21:23.929: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089257379s
Feb  8 14:21:25.979: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.139049152s
STEP: Saw pod success
Feb  8 14:21:25.979: INFO: Pod "downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c" satisfied condition "success or failure"
Feb  8 14:21:25.985: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c container client-container: 
STEP: delete the pod
Feb  8 14:21:26.030: INFO: Waiting for pod downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c to disappear
Feb  8 14:21:26.033: INFO: Pod downwardapi-volume-10b008fc-0aac-45e0-90b7-1779ea28ae0c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:21:26.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6319" for this suite.
Feb  8 14:21:34.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:21:34.283: INFO: namespace downward-api-6319 deletion completed in 8.246693716s

• [SLOW TEST:18.556 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
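The Downward API volume test that just completed can be sketched as a manifest like the following. This is an illustrative reconstruction, not the generated spec from the log: the pod name, image, command, and mount path are assumptions; only the container name `client-container` and the "project `metadata.name` into a file" behavior come from the test above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # real test uses a generated UUID name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29              # assumed; the e2e suite uses its own test image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod's own name appears in the file
```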
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:21:34.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  8 14:21:34.385: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3322,SelfLink:/api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-watch-closed,UID:77b42c36-d3bb-4dd7-aa3f-03dfa0fc5ae9,ResourceVersion:23578167,Generation:0,CreationTimestamp:2020-02-08 14:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  8 14:21:34.385: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3322,SelfLink:/api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-watch-closed,UID:77b42c36-d3bb-4dd7-aa3f-03dfa0fc5ae9,ResourceVersion:23578168,Generation:0,CreationTimestamp:2020-02-08 14:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  8 14:21:34.434: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3322,SelfLink:/api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-watch-closed,UID:77b42c36-d3bb-4dd7-aa3f-03dfa0fc5ae9,ResourceVersion:23578169,Generation:0,CreationTimestamp:2020-02-08 14:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  8 14:21:34.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3322,SelfLink:/api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-watch-closed,UID:77b42c36-d3bb-4dd7-aa3f-03dfa0fc5ae9,ResourceVersion:23578170,Generation:0,CreationTimestamp:2020-02-08 14:21:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:21:34.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3322" for this suite.
Feb  8 14:21:40.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:21:40.597: INFO: namespace watch-3322 deletion completed in 6.156001398s

• [SLOW TEST:6.314 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
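The watch-restart flow above (observe two events, close the watch, mutate while closed, then reconnect from the last observed `ResourceVersion`) can be simulated in a few lines. This is a toy sketch of the semantics, not the client-go API; `FakeApiServer` and its methods are invented for illustration, and the resource versions are copied from the log.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    type: str              # ADDED / MODIFIED / DELETED
    resource_version: int

class FakeApiServer:
    """Toy stand-in: keeps the full event history for one object."""
    def __init__(self):
        self.events: List[Event] = []

    def record(self, type_: str, rv: int):
        self.events.append(Event(type_, rv))

    def watch(self, since_rv: int) -> List[Event]:
        # Replay only events strictly newer than the client's version,
        # mirroring a watch started with resourceVersion=since_rv.
        return [e for e in self.events if e.resource_version > since_rv]

server = FakeApiServer()
server.record("ADDED", 23578167)
server.record("MODIFIED", 23578168)

# First watch: the client sees two notifications, then closes.
first = server.watch(since_rv=0)
last_seen = first[-1].resource_version

# The configmap is modified and deleted while the watch is closed.
server.record("MODIFIED", 23578169)
server.record("DELETED", 23578170)

# Restarted watch resumes from the last observed resourceVersion and
# receives exactly the changes it missed.
resumed = server.watch(since_rv=last_seen)
print([e.type for e in resumed])
```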
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:21:40.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 14:21:40.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1776'
Feb  8 14:21:42.596: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 14:21:42.596: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  8 14:21:42.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1776'
Feb  8 14:21:42.914: INFO: stderr: ""
Feb  8 14:21:42.914: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:21:42.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1776" for this suite.
Feb  8 14:22:06.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:22:07.174: INFO: namespace kubectl-1776 deletion completed in 24.245919476s

• [SLOW TEST:26.575 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
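The deprecation warning in the test above says `kubectl run --generator=job/v1` is going away. A Job manifest roughly equivalent to that invocation would look like this; the name and image are copied from the log, while the rest of the spec is an assumption about what the generator emits:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure        # matches --restart=OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```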
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:22:07.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  8 14:22:07.281: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:22:25.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7889" for this suite.
Feb  8 14:22:49.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:22:49.948: INFO: namespace init-container-7889 deletion completed in 24.158388964s

• [SLOW TEST:42.774 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
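The init-container test above creates a RestartAlways pod whose `spec.initContainers` must run to completion, in order, before the regular containers start. A minimal sketch of such a pod (all names, images, and commands are illustrative assumptions; the log only shows that the spec carried initContainers):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
spec:
  restartPolicy: Always
  initContainers:                     # run sequentially before the app container
  - name: init-1
    image: busybox:1.29
    command: ["/bin/true"]
  - name: init-2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
```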
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:22:49.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  8 14:22:50.111: INFO: Waiting up to 5m0s for pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8" in namespace "var-expansion-5225" to be "success or failure"
Feb  8 14:22:50.126: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.216696ms
Feb  8 14:22:53.877: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.76588291s
Feb  8 14:22:55.888: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.77721032s
Feb  8 14:22:57.901: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.789839342s
Feb  8 14:22:59.917: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.806306385s
Feb  8 14:23:01.924: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.812585931s
STEP: Saw pod success
Feb  8 14:23:01.924: INFO: Pod "var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8" satisfied condition "success or failure"
Feb  8 14:23:01.927: INFO: Trying to get logs from node iruya-node pod var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8 container dapi-container: 
STEP: delete the pod
Feb  8 14:23:02.030: INFO: Waiting for pod var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8 to disappear
Feb  8 14:23:02.065: INFO: Pod var-expansion-ac7c2b1a-fef3-42da-8af4-4a2b94ed3fc8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:23:02.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5225" for this suite.
Feb  8 14:23:08.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:23:08.235: INFO: namespace var-expansion-5225 deletion completed in 6.166137909s

• [SLOW TEST:18.287 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
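The variable-expansion test above verifies that `$(VAR)` references in a container's command are substituted from its environment. A hedged sketch of such a pod (the container name `dapi-container` comes from the log; the pod name, image, command, and variable are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    # $(TEST_VAR) is expanded by the kubelet from the env list below,
    # not by the shell.
    command: ["/bin/sh", "-c", "echo test-value: $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: "test-value"
```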
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:23:08.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:23:08.385: INFO: Creating deployment "test-recreate-deployment"
Feb  8 14:23:08.410: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb  8 14:23:08.436: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  8 14:23:10.459: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb  8 14:23:10.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:23:12.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:23:14.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:23:16.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768588, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  8 14:23:18.480: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  8 14:23:18.590: INFO: Updating deployment test-recreate-deployment
Feb  8 14:23:18.590: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  8 14:23:18.964: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4926,SelfLink:/apis/apps/v1/namespaces/deployment-4926/deployments/test-recreate-deployment,UID:c7b124b9-fa5c-43ad-b66b-67fe2bd07177,ResourceVersion:23578443,Generation:2,CreationTimestamp:2020-02-08 14:23:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-08 14:23:18 +0000 UTC 2020-02-08 14:23:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-08 14:23:18 +0000 UTC 2020-02-08 14:23:08 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  8 14:23:18.971: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4926,SelfLink:/apis/apps/v1/namespaces/deployment-4926/replicasets/test-recreate-deployment-5c8c9cc69d,UID:55221d68-de99-4684-861c-44bb1c249984,ResourceVersion:23578442,Generation:1,CreationTimestamp:2020-02-08 14:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c7b124b9-fa5c-43ad-b66b-67fe2bd07177 0xc002f87977 0xc002f87978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 14:23:18.971: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  8 14:23:18.971: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4926,SelfLink:/apis/apps/v1/namespaces/deployment-4926/replicasets/test-recreate-deployment-6df85df6b9,UID:56c5fa7a-7952-4e40-86bf-8585b84d08c4,ResourceVersion:23578431,Generation:2,CreationTimestamp:2020-02-08 14:23:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c7b124b9-fa5c-43ad-b66b-67fe2bd07177 0xc002f87a47 0xc002f87a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 14:23:18.975: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8rvjx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8rvjx,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4926,SelfLink:/api/v1/namespaces/deployment-4926/pods/test-recreate-deployment-5c8c9cc69d-8rvjx,UID:6d554a99-b663-4d0e-9633-c73bb9e8f2f9,ResourceVersion:23578435,Generation:0,CreationTimestamp:2020-02-08 14:23:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 55221d68-de99-4684-861c-44bb1c249984 0xc0029414d7 0xc0029414d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gq8mg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gq8mg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gq8mg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002941680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029416a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:23:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:23:18.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4926" for this suite.
Feb  8 14:23:25.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:23:25.222: INFO: namespace deployment-4926 deletion completed in 6.239961235s

• [SLOW TEST:16.986 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
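The Recreate-strategy test above can be summarized by a Deployment like the following. The name, labels, and the old revision's image are taken from the struct dumps in the log; replica count and layout are condensed for readability:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate    # old pods are terminated before new ones are created,
                      # unlike the default RollingUpdate strategy
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The test then triggers a rollout (the log shows the pod template switching to `docker.io/library/nginx:1.14-alpine`) and watches that no new-revision pod runs while an old-revision pod still exists.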
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:23:25.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  8 14:23:47.438: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:47.438: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:47.569133       8 log.go:172] (0xc001894370) (0xc002ce39a0) Create stream
I0208 14:23:47.569188       8 log.go:172] (0xc001894370) (0xc002ce39a0) Stream added, broadcasting: 1
I0208 14:23:47.578436       8 log.go:172] (0xc001894370) Reply frame received for 1
I0208 14:23:47.578481       8 log.go:172] (0xc001894370) (0xc0024db7c0) Create stream
I0208 14:23:47.578491       8 log.go:172] (0xc001894370) (0xc0024db7c0) Stream added, broadcasting: 3
I0208 14:23:47.582805       8 log.go:172] (0xc001894370) Reply frame received for 3
I0208 14:23:47.582845       8 log.go:172] (0xc001894370) (0xc002ce3a40) Create stream
I0208 14:23:47.582860       8 log.go:172] (0xc001894370) (0xc002ce3a40) Stream added, broadcasting: 5
I0208 14:23:47.585175       8 log.go:172] (0xc001894370) Reply frame received for 5
I0208 14:23:47.715890       8 log.go:172] (0xc001894370) Data frame received for 3
I0208 14:23:47.716230       8 log.go:172] (0xc0024db7c0) (3) Data frame handling
I0208 14:23:47.716311       8 log.go:172] (0xc0024db7c0) (3) Data frame sent
I0208 14:23:47.897342       8 log.go:172] (0xc001894370) (0xc0024db7c0) Stream removed, broadcasting: 3
I0208 14:23:47.897566       8 log.go:172] (0xc001894370) Data frame received for 1
I0208 14:23:47.897587       8 log.go:172] (0xc002ce39a0) (1) Data frame handling
I0208 14:23:47.897599       8 log.go:172] (0xc002ce39a0) (1) Data frame sent
I0208 14:23:47.897614       8 log.go:172] (0xc001894370) (0xc002ce39a0) Stream removed, broadcasting: 1
I0208 14:23:47.897736       8 log.go:172] (0xc001894370) (0xc002ce3a40) Stream removed, broadcasting: 5
I0208 14:23:47.897756       8 log.go:172] (0xc001894370) Go away received
I0208 14:23:47.897815       8 log.go:172] (0xc001894370) (0xc002ce39a0) Stream removed, broadcasting: 1
I0208 14:23:47.897875       8 log.go:172] (0xc001894370) (0xc0024db7c0) Stream removed, broadcasting: 3
I0208 14:23:47.897902       8 log.go:172] (0xc001894370) (0xc002ce3a40) Stream removed, broadcasting: 5
Feb  8 14:23:47.897: INFO: Exec stderr: ""
Feb  8 14:23:47.897: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:47.898: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:47.983167       8 log.go:172] (0xc0007a0f20) (0xc0022deb40) Create stream
I0208 14:23:47.983363       8 log.go:172] (0xc0007a0f20) (0xc0022deb40) Stream added, broadcasting: 1
I0208 14:23:47.994270       8 log.go:172] (0xc0007a0f20) Reply frame received for 1
I0208 14:23:47.994345       8 log.go:172] (0xc0007a0f20) (0xc002ce3c20) Create stream
I0208 14:23:47.994355       8 log.go:172] (0xc0007a0f20) (0xc002ce3c20) Stream added, broadcasting: 3
I0208 14:23:47.995841       8 log.go:172] (0xc0007a0f20) Reply frame received for 3
I0208 14:23:47.995882       8 log.go:172] (0xc0007a0f20) (0xc002ce3cc0) Create stream
I0208 14:23:47.995895       8 log.go:172] (0xc0007a0f20) (0xc002ce3cc0) Stream added, broadcasting: 5
I0208 14:23:47.998049       8 log.go:172] (0xc0007a0f20) Reply frame received for 5
I0208 14:23:48.102917       8 log.go:172] (0xc0007a0f20) Data frame received for 3
I0208 14:23:48.102969       8 log.go:172] (0xc002ce3c20) (3) Data frame handling
I0208 14:23:48.102983       8 log.go:172] (0xc002ce3c20) (3) Data frame sent
I0208 14:23:48.239028       8 log.go:172] (0xc0007a0f20) Data frame received for 1
I0208 14:23:48.239240       8 log.go:172] (0xc0007a0f20) (0xc002ce3c20) Stream removed, broadcasting: 3
I0208 14:23:48.239294       8 log.go:172] (0xc0022deb40) (1) Data frame handling
I0208 14:23:48.239310       8 log.go:172] (0xc0022deb40) (1) Data frame sent
I0208 14:23:48.239461       8 log.go:172] (0xc0007a0f20) (0xc002ce3cc0) Stream removed, broadcasting: 5
I0208 14:23:48.239489       8 log.go:172] (0xc0007a0f20) (0xc0022deb40) Stream removed, broadcasting: 1
I0208 14:23:48.239501       8 log.go:172] (0xc0007a0f20) Go away received
I0208 14:23:48.239966       8 log.go:172] (0xc0007a0f20) (0xc0022deb40) Stream removed, broadcasting: 1
I0208 14:23:48.239989       8 log.go:172] (0xc0007a0f20) (0xc002ce3c20) Stream removed, broadcasting: 3
I0208 14:23:48.240001       8 log.go:172] (0xc0007a0f20) (0xc002ce3cc0) Stream removed, broadcasting: 5
Feb  8 14:23:48.240: INFO: Exec stderr: ""
Feb  8 14:23:48.240: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:48.240: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:48.309208       8 log.go:172] (0xc0007a1970) (0xc0022dee60) Create stream
I0208 14:23:48.309298       8 log.go:172] (0xc0007a1970) (0xc0022dee60) Stream added, broadcasting: 1
I0208 14:23:48.317857       8 log.go:172] (0xc0007a1970) Reply frame received for 1
I0208 14:23:48.317953       8 log.go:172] (0xc0007a1970) (0xc001ef8b40) Create stream
I0208 14:23:48.317967       8 log.go:172] (0xc0007a1970) (0xc001ef8b40) Stream added, broadcasting: 3
I0208 14:23:48.321075       8 log.go:172] (0xc0007a1970) Reply frame received for 3
I0208 14:23:48.321163       8 log.go:172] (0xc0007a1970) (0xc0024db9a0) Create stream
I0208 14:23:48.321184       8 log.go:172] (0xc0007a1970) (0xc0024db9a0) Stream added, broadcasting: 5
I0208 14:23:48.323113       8 log.go:172] (0xc0007a1970) Reply frame received for 5
I0208 14:23:48.411915       8 log.go:172] (0xc0007a1970) Data frame received for 3
I0208 14:23:48.411965       8 log.go:172] (0xc001ef8b40) (3) Data frame handling
I0208 14:23:48.411986       8 log.go:172] (0xc001ef8b40) (3) Data frame sent
I0208 14:23:48.696201       8 log.go:172] (0xc0007a1970) (0xc001ef8b40) Stream removed, broadcasting: 3
I0208 14:23:48.696326       8 log.go:172] (0xc0007a1970) (0xc0024db9a0) Stream removed, broadcasting: 5
I0208 14:23:48.696364       8 log.go:172] (0xc0007a1970) Data frame received for 1
I0208 14:23:48.696400       8 log.go:172] (0xc0022dee60) (1) Data frame handling
I0208 14:23:48.696422       8 log.go:172] (0xc0022dee60) (1) Data frame sent
I0208 14:23:48.696436       8 log.go:172] (0xc0007a1970) (0xc0022dee60) Stream removed, broadcasting: 1
I0208 14:23:48.696448       8 log.go:172] (0xc0007a1970) Go away received
I0208 14:23:48.696670       8 log.go:172] (0xc0007a1970) (0xc0022dee60) Stream removed, broadcasting: 1
I0208 14:23:48.696721       8 log.go:172] (0xc0007a1970) (0xc001ef8b40) Stream removed, broadcasting: 3
I0208 14:23:48.696735       8 log.go:172] (0xc0007a1970) (0xc0024db9a0) Stream removed, broadcasting: 5
Feb  8 14:23:48.696: INFO: Exec stderr: ""
Feb  8 14:23:48.696: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:48.696: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:48.861263       8 log.go:172] (0xc00154fa20) (0xc0024dbcc0) Create stream
I0208 14:23:48.861309       8 log.go:172] (0xc00154fa20) (0xc0024dbcc0) Stream added, broadcasting: 1
I0208 14:23:48.870613       8 log.go:172] (0xc00154fa20) Reply frame received for 1
I0208 14:23:48.870696       8 log.go:172] (0xc00154fa20) (0xc002ce3d60) Create stream
I0208 14:23:48.870704       8 log.go:172] (0xc00154fa20) (0xc002ce3d60) Stream added, broadcasting: 3
I0208 14:23:48.872546       8 log.go:172] (0xc00154fa20) Reply frame received for 3
I0208 14:23:48.872575       8 log.go:172] (0xc00154fa20) (0xc0022def00) Create stream
I0208 14:23:48.872584       8 log.go:172] (0xc00154fa20) (0xc0022def00) Stream added, broadcasting: 5
I0208 14:23:48.873983       8 log.go:172] (0xc00154fa20) Reply frame received for 5
I0208 14:23:48.972283       8 log.go:172] (0xc00154fa20) Data frame received for 3
I0208 14:23:48.972323       8 log.go:172] (0xc002ce3d60) (3) Data frame handling
I0208 14:23:48.972391       8 log.go:172] (0xc002ce3d60) (3) Data frame sent
I0208 14:23:49.079436       8 log.go:172] (0xc00154fa20) Data frame received for 1
I0208 14:23:49.079551       8 log.go:172] (0xc0024dbcc0) (1) Data frame handling
I0208 14:23:49.079649       8 log.go:172] (0xc0024dbcc0) (1) Data frame sent
I0208 14:23:49.079667       8 log.go:172] (0xc00154fa20) (0xc0024dbcc0) Stream removed, broadcasting: 1
I0208 14:23:49.079704       8 log.go:172] (0xc00154fa20) (0xc002ce3d60) Stream removed, broadcasting: 3
I0208 14:23:49.079750       8 log.go:172] (0xc00154fa20) (0xc0022def00) Stream removed, broadcasting: 5
I0208 14:23:49.079794       8 log.go:172] (0xc00154fa20) (0xc0024dbcc0) Stream removed, broadcasting: 1
I0208 14:23:49.079816       8 log.go:172] (0xc00154fa20) Go away received
I0208 14:23:49.079982       8 log.go:172] (0xc00154fa20) (0xc002ce3d60) Stream removed, broadcasting: 3
I0208 14:23:49.080050       8 log.go:172] (0xc00154fa20) (0xc0022def00) Stream removed, broadcasting: 5
Feb  8 14:23:49.080: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  8 14:23:49.080: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:49.080: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:49.143935       8 log.go:172] (0xc002220630) (0xc0018ca0a0) Create stream
I0208 14:23:49.143991       8 log.go:172] (0xc002220630) (0xc0018ca0a0) Stream added, broadcasting: 1
I0208 14:23:49.149878       8 log.go:172] (0xc002220630) Reply frame received for 1
I0208 14:23:49.149898       8 log.go:172] (0xc002220630) (0xc0019f5d60) Create stream
I0208 14:23:49.149903       8 log.go:172] (0xc002220630) (0xc0019f5d60) Stream added, broadcasting: 3
I0208 14:23:49.153436       8 log.go:172] (0xc002220630) Reply frame received for 3
I0208 14:23:49.153465       8 log.go:172] (0xc002220630) (0xc0022defa0) Create stream
I0208 14:23:49.153477       8 log.go:172] (0xc002220630) (0xc0022defa0) Stream added, broadcasting: 5
I0208 14:23:49.154738       8 log.go:172] (0xc002220630) Reply frame received for 5
I0208 14:23:49.263873       8 log.go:172] (0xc002220630) Data frame received for 3
I0208 14:23:49.263905       8 log.go:172] (0xc0019f5d60) (3) Data frame handling
I0208 14:23:49.263924       8 log.go:172] (0xc0019f5d60) (3) Data frame sent
I0208 14:23:49.365614       8 log.go:172] (0xc002220630) Data frame received for 1
I0208 14:23:49.365801       8 log.go:172] (0xc0018ca0a0) (1) Data frame handling
I0208 14:23:49.365899       8 log.go:172] (0xc0018ca0a0) (1) Data frame sent
I0208 14:23:49.366180       8 log.go:172] (0xc002220630) (0xc0022defa0) Stream removed, broadcasting: 5
I0208 14:23:49.366237       8 log.go:172] (0xc002220630) (0xc0018ca0a0) Stream removed, broadcasting: 1
I0208 14:23:49.366337       8 log.go:172] (0xc002220630) (0xc0019f5d60) Stream removed, broadcasting: 3
I0208 14:23:49.366359       8 log.go:172] (0xc002220630) (0xc0018ca0a0) Stream removed, broadcasting: 1
I0208 14:23:49.366367       8 log.go:172] (0xc002220630) (0xc0019f5d60) Stream removed, broadcasting: 3
I0208 14:23:49.366376       8 log.go:172] (0xc002220630) (0xc0022defa0) Stream removed, broadcasting: 5
Feb  8 14:23:49.366: INFO: Exec stderr: ""
Feb  8 14:23:49.366: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:49.366: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:49.367067       8 log.go:172] (0xc002220630) Go away received
I0208 14:23:49.438896       8 log.go:172] (0xc0021a2f20) (0xc0022df540) Create stream
I0208 14:23:49.438955       8 log.go:172] (0xc0021a2f20) (0xc0022df540) Stream added, broadcasting: 1
I0208 14:23:49.445727       8 log.go:172] (0xc0021a2f20) Reply frame received for 1
I0208 14:23:49.445746       8 log.go:172] (0xc0021a2f20) (0xc001ef8c80) Create stream
I0208 14:23:49.445752       8 log.go:172] (0xc0021a2f20) (0xc001ef8c80) Stream added, broadcasting: 3
I0208 14:23:49.447473       8 log.go:172] (0xc0021a2f20) Reply frame received for 3
I0208 14:23:49.447492       8 log.go:172] (0xc0021a2f20) (0xc0019f5f40) Create stream
I0208 14:23:49.447499       8 log.go:172] (0xc0021a2f20) (0xc0019f5f40) Stream added, broadcasting: 5
I0208 14:23:49.448858       8 log.go:172] (0xc0021a2f20) Reply frame received for 5
I0208 14:23:49.558603       8 log.go:172] (0xc0021a2f20) Data frame received for 3
I0208 14:23:49.558667       8 log.go:172] (0xc001ef8c80) (3) Data frame handling
I0208 14:23:49.558706       8 log.go:172] (0xc001ef8c80) (3) Data frame sent
I0208 14:23:49.658171       8 log.go:172] (0xc0021a2f20) (0xc001ef8c80) Stream removed, broadcasting: 3
I0208 14:23:49.658308       8 log.go:172] (0xc0021a2f20) Data frame received for 1
I0208 14:23:49.658336       8 log.go:172] (0xc0022df540) (1) Data frame handling
I0208 14:23:49.658348       8 log.go:172] (0xc0022df540) (1) Data frame sent
I0208 14:23:49.658353       8 log.go:172] (0xc0021a2f20) (0xc0022df540) Stream removed, broadcasting: 1
I0208 14:23:49.658372       8 log.go:172] (0xc0021a2f20) (0xc0019f5f40) Stream removed, broadcasting: 5
I0208 14:23:49.658398       8 log.go:172] (0xc0021a2f20) Go away received
I0208 14:23:49.658470       8 log.go:172] (0xc0021a2f20) (0xc0022df540) Stream removed, broadcasting: 1
I0208 14:23:49.658477       8 log.go:172] (0xc0021a2f20) (0xc001ef8c80) Stream removed, broadcasting: 3
I0208 14:23:49.658480       8 log.go:172] (0xc0021a2f20) (0xc0019f5f40) Stream removed, broadcasting: 5
Feb  8 14:23:49.658: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  8 14:23:49.658: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:49.658: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:49.706245       8 log.go:172] (0xc0013d13f0) (0xc00125e5a0) Create stream
I0208 14:23:49.706262       8 log.go:172] (0xc0013d13f0) (0xc00125e5a0) Stream added, broadcasting: 1
I0208 14:23:49.712353       8 log.go:172] (0xc0013d13f0) Reply frame received for 1
I0208 14:23:49.712378       8 log.go:172] (0xc0013d13f0) (0xc00125eaa0) Create stream
I0208 14:23:49.712387       8 log.go:172] (0xc0013d13f0) (0xc00125eaa0) Stream added, broadcasting: 3
I0208 14:23:49.714794       8 log.go:172] (0xc0013d13f0) Reply frame received for 3
I0208 14:23:49.714810       8 log.go:172] (0xc0013d13f0) (0xc0022df5e0) Create stream
I0208 14:23:49.714816       8 log.go:172] (0xc0013d13f0) (0xc0022df5e0) Stream added, broadcasting: 5
I0208 14:23:49.715919       8 log.go:172] (0xc0013d13f0) Reply frame received for 5
I0208 14:23:49.798365       8 log.go:172] (0xc0013d13f0) Data frame received for 3
I0208 14:23:49.798406       8 log.go:172] (0xc00125eaa0) (3) Data frame handling
I0208 14:23:49.798418       8 log.go:172] (0xc00125eaa0) (3) Data frame sent
I0208 14:23:49.916781       8 log.go:172] (0xc0013d13f0) (0xc00125eaa0) Stream removed, broadcasting: 3
I0208 14:23:49.916896       8 log.go:172] (0xc0013d13f0) Data frame received for 1
I0208 14:23:49.916908       8 log.go:172] (0xc00125e5a0) (1) Data frame handling
I0208 14:23:49.916919       8 log.go:172] (0xc00125e5a0) (1) Data frame sent
I0208 14:23:49.916929       8 log.go:172] (0xc0013d13f0) (0xc00125e5a0) Stream removed, broadcasting: 1
I0208 14:23:49.916960       8 log.go:172] (0xc0013d13f0) (0xc0022df5e0) Stream removed, broadcasting: 5
I0208 14:23:49.917005       8 log.go:172] (0xc0013d13f0) Go away received
I0208 14:23:49.917041       8 log.go:172] (0xc0013d13f0) (0xc00125e5a0) Stream removed, broadcasting: 1
I0208 14:23:49.917058       8 log.go:172] (0xc0013d13f0) (0xc00125eaa0) Stream removed, broadcasting: 3
I0208 14:23:49.917159       8 log.go:172] (0xc0013d13f0) (0xc0022df5e0) Stream removed, broadcasting: 5
Feb  8 14:23:49.917: INFO: Exec stderr: ""
Feb  8 14:23:49.917: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:49.917: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:49.986540       8 log.go:172] (0xc00285a210) (0xc00125f860) Create stream
I0208 14:23:49.986624       8 log.go:172] (0xc00285a210) (0xc00125f860) Stream added, broadcasting: 1
I0208 14:23:49.992566       8 log.go:172] (0xc00285a210) Reply frame received for 1
I0208 14:23:49.992622       8 log.go:172] (0xc00285a210) (0xc002ce3e00) Create stream
I0208 14:23:49.992638       8 log.go:172] (0xc00285a210) (0xc002ce3e00) Stream added, broadcasting: 3
I0208 14:23:49.994874       8 log.go:172] (0xc00285a210) Reply frame received for 3
I0208 14:23:49.994900       8 log.go:172] (0xc00285a210) (0xc002ce3ea0) Create stream
I0208 14:23:49.994911       8 log.go:172] (0xc00285a210) (0xc002ce3ea0) Stream added, broadcasting: 5
I0208 14:23:49.996445       8 log.go:172] (0xc00285a210) Reply frame received for 5
I0208 14:23:50.108097       8 log.go:172] (0xc00285a210) Data frame received for 3
I0208 14:23:50.108145       8 log.go:172] (0xc002ce3e00) (3) Data frame handling
I0208 14:23:50.108158       8 log.go:172] (0xc002ce3e00) (3) Data frame sent
I0208 14:23:50.228770       8 log.go:172] (0xc00285a210) (0xc002ce3e00) Stream removed, broadcasting: 3
I0208 14:23:50.228911       8 log.go:172] (0xc00285a210) Data frame received for 1
I0208 14:23:50.228925       8 log.go:172] (0xc00125f860) (1) Data frame handling
I0208 14:23:50.228948       8 log.go:172] (0xc00125f860) (1) Data frame sent
I0208 14:23:50.228961       8 log.go:172] (0xc00285a210) (0xc00125f860) Stream removed, broadcasting: 1
I0208 14:23:50.229038       8 log.go:172] (0xc00285a210) (0xc002ce3ea0) Stream removed, broadcasting: 5
I0208 14:23:50.229120       8 log.go:172] (0xc00285a210) Go away received
I0208 14:23:50.229379       8 log.go:172] (0xc00285a210) (0xc00125f860) Stream removed, broadcasting: 1
I0208 14:23:50.229395       8 log.go:172] (0xc00285a210) (0xc002ce3e00) Stream removed, broadcasting: 3
I0208 14:23:50.229409       8 log.go:172] (0xc00285a210) (0xc002ce3ea0) Stream removed, broadcasting: 5
Feb  8 14:23:50.229: INFO: Exec stderr: ""
Feb  8 14:23:50.229: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:50.229: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:50.285089       8 log.go:172] (0xc00285aa50) (0xc00125ff40) Create stream
I0208 14:23:50.285158       8 log.go:172] (0xc00285aa50) (0xc00125ff40) Stream added, broadcasting: 1
I0208 14:23:50.291530       8 log.go:172] (0xc00285aa50) Reply frame received for 1
I0208 14:23:50.291551       8 log.go:172] (0xc00285aa50) (0xc001ef8f00) Create stream
I0208 14:23:50.291558       8 log.go:172] (0xc00285aa50) (0xc001ef8f00) Stream added, broadcasting: 3
I0208 14:23:50.293269       8 log.go:172] (0xc00285aa50) Reply frame received for 3
I0208 14:23:50.293301       8 log.go:172] (0xc00285aa50) (0xc0018ca3c0) Create stream
I0208 14:23:50.293310       8 log.go:172] (0xc00285aa50) (0xc0018ca3c0) Stream added, broadcasting: 5
I0208 14:23:50.295128       8 log.go:172] (0xc00285aa50) Reply frame received for 5
I0208 14:23:50.383775       8 log.go:172] (0xc00285aa50) Data frame received for 3
I0208 14:23:50.383830       8 log.go:172] (0xc001ef8f00) (3) Data frame handling
I0208 14:23:50.383846       8 log.go:172] (0xc001ef8f00) (3) Data frame sent
I0208 14:23:50.612703       8 log.go:172] (0xc00285aa50) Data frame received for 1
I0208 14:23:50.612815       8 log.go:172] (0xc00125ff40) (1) Data frame handling
I0208 14:23:50.612845       8 log.go:172] (0xc00125ff40) (1) Data frame sent
I0208 14:23:50.612858       8 log.go:172] (0xc00285aa50) (0xc00125ff40) Stream removed, broadcasting: 1
I0208 14:23:50.613642       8 log.go:172] (0xc00285aa50) (0xc001ef8f00) Stream removed, broadcasting: 3
I0208 14:23:50.613730       8 log.go:172] (0xc00285aa50) (0xc0018ca3c0) Stream removed, broadcasting: 5
I0208 14:23:50.613787       8 log.go:172] (0xc00285aa50) Go away received
I0208 14:23:50.613844       8 log.go:172] (0xc00285aa50) (0xc00125ff40) Stream removed, broadcasting: 1
I0208 14:23:50.613900       8 log.go:172] (0xc00285aa50) (0xc001ef8f00) Stream removed, broadcasting: 3
I0208 14:23:50.613916       8 log.go:172] (0xc00285aa50) (0xc0018ca3c0) Stream removed, broadcasting: 5
Feb  8 14:23:50.613: INFO: Exec stderr: ""
Feb  8 14:23:50.613: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8863 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:23:50.614: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:23:50.723537       8 log.go:172] (0xc002221810) (0xc0018ca6e0) Create stream
I0208 14:23:50.723835       8 log.go:172] (0xc002221810) (0xc0018ca6e0) Stream added, broadcasting: 1
I0208 14:23:50.743859       8 log.go:172] (0xc002221810) Reply frame received for 1
I0208 14:23:50.744123       8 log.go:172] (0xc002221810) (0xc0018ca8c0) Create stream
I0208 14:23:50.744171       8 log.go:172] (0xc002221810) (0xc0018ca8c0) Stream added, broadcasting: 3
I0208 14:23:50.748084       8 log.go:172] (0xc002221810) Reply frame received for 3
I0208 14:23:50.748160       8 log.go:172] (0xc002221810) (0xc002ce3f40) Create stream
I0208 14:23:50.748187       8 log.go:172] (0xc002221810) (0xc002ce3f40) Stream added, broadcasting: 5
I0208 14:23:50.755754       8 log.go:172] (0xc002221810) Reply frame received for 5
I0208 14:23:50.948751       8 log.go:172] (0xc002221810) Data frame received for 3
I0208 14:23:50.948837       8 log.go:172] (0xc0018ca8c0) (3) Data frame handling
I0208 14:23:50.948872       8 log.go:172] (0xc0018ca8c0) (3) Data frame sent
I0208 14:23:51.090071       8 log.go:172] (0xc002221810) Data frame received for 1
I0208 14:23:51.090149       8 log.go:172] (0xc0018ca6e0) (1) Data frame handling
I0208 14:23:51.090209       8 log.go:172] (0xc0018ca6e0) (1) Data frame sent
I0208 14:23:51.090599       8 log.go:172] (0xc002221810) (0xc0018ca6e0) Stream removed, broadcasting: 1
I0208 14:23:51.090684       8 log.go:172] (0xc002221810) (0xc0018ca8c0) Stream removed, broadcasting: 3
I0208 14:23:51.093511       8 log.go:172] (0xc002221810) (0xc002ce3f40) Stream removed, broadcasting: 5
I0208 14:23:51.093564       8 log.go:172] (0xc002221810) (0xc0018ca6e0) Stream removed, broadcasting: 1
I0208 14:23:51.093589       8 log.go:172] (0xc002221810) (0xc0018ca8c0) Stream removed, broadcasting: 3
I0208 14:23:51.093648       8 log.go:172] (0xc002221810) (0xc002ce3f40) Stream removed, broadcasting: 5
Feb  8 14:23:51.094: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:23:51.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8863" for this suite.
Feb  8 14:24:43.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:24:43.265: INFO: namespace e2e-kubelet-etc-hosts-8863 deletion completed in 52.158823905s

• [SLOW TEST:78.043 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:24:43.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-43661749-63ee-4b4c-9dd6-4c4e08f2f2a9
STEP: Creating a pod to test consume configMaps
Feb  8 14:24:43.417: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf" in namespace "projected-2908" to be "success or failure"
Feb  8 14:24:43.423: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.944086ms
Feb  8 14:24:45.430: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013427729s
Feb  8 14:24:47.445: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028475355s
Feb  8 14:24:49.454: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03666837s
Feb  8 14:24:51.465: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047856889s
Feb  8 14:24:53.475: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057576732s
STEP: Saw pod success
Feb  8 14:24:53.475: INFO: Pod "pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf" satisfied condition "success or failure"
Feb  8 14:24:53.483: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 14:24:53.598: INFO: Waiting for pod pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf to disappear
Feb  8 14:24:53.660: INFO: Pod pod-projected-configmaps-2494c33e-c811-4822-b84d-11067bff29bf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:24:53.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2908" for this suite.
Feb  8 14:24:59.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:24:59.933: INFO: namespace projected-2908 deletion completed in 6.2615729s

• [SLOW TEST:16.667 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:24:59.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8168
I0208 14:25:00.167584       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8168, replica count: 1
I0208 14:25:01.217985       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:02.218285       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:03.218488       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:04.218647       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:05.218805       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:06.219090       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0208 14:25:07.219262       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  8 14:25:07.443: INFO: Created: latency-svc-wkls2
Feb  8 14:25:07.480: INFO: Got endpoints: latency-svc-wkls2 [161.116561ms]
Feb  8 14:25:07.560: INFO: Created: latency-svc-9fpcb
Feb  8 14:25:07.569: INFO: Got endpoints: latency-svc-9fpcb [88.846394ms]
Feb  8 14:25:07.714: INFO: Created: latency-svc-vxn25
Feb  8 14:25:07.717: INFO: Got endpoints: latency-svc-vxn25 [236.35305ms]
Feb  8 14:25:07.778: INFO: Created: latency-svc-v6c6p
Feb  8 14:25:07.797: INFO: Got endpoints: latency-svc-v6c6p [316.213986ms]
Feb  8 14:25:07.900: INFO: Created: latency-svc-k4wdf
Feb  8 14:25:07.911: INFO: Got endpoints: latency-svc-k4wdf [430.469513ms]
Feb  8 14:25:07.956: INFO: Created: latency-svc-89vkb
Feb  8 14:25:07.968: INFO: Got endpoints: latency-svc-89vkb [487.183376ms]
Feb  8 14:25:08.105: INFO: Created: latency-svc-6vwvd
Feb  8 14:25:08.105: INFO: Got endpoints: latency-svc-6vwvd [625.201598ms]
Feb  8 14:25:08.162: INFO: Created: latency-svc-6d9jg
Feb  8 14:25:08.256: INFO: Got endpoints: latency-svc-6d9jg [775.795443ms]
Feb  8 14:25:08.284: INFO: Created: latency-svc-4xx2h
Feb  8 14:25:08.287: INFO: Got endpoints: latency-svc-4xx2h [805.940653ms]
Feb  8 14:25:08.363: INFO: Created: latency-svc-hx4hf
Feb  8 14:25:08.472: INFO: Got endpoints: latency-svc-hx4hf [991.546577ms]
Feb  8 14:25:08.538: INFO: Created: latency-svc-hj8zm
Feb  8 14:25:08.540: INFO: Got endpoints: latency-svc-hj8zm [1.059258279s]
Feb  8 14:25:08.691: INFO: Created: latency-svc-qrjjk
Feb  8 14:25:08.733: INFO: Got endpoints: latency-svc-qrjjk [1.252045397s]
Feb  8 14:25:08.752: INFO: Created: latency-svc-2qpz9
Feb  8 14:25:08.753: INFO: Got endpoints: latency-svc-2qpz9 [1.272313784s]
Feb  8 14:25:08.859: INFO: Created: latency-svc-c5lwr
Feb  8 14:25:08.860: INFO: Got endpoints: latency-svc-c5lwr [1.379098388s]
Feb  8 14:25:08.949: INFO: Created: latency-svc-bx8lz
Feb  8 14:25:08.999: INFO: Got endpoints: latency-svc-bx8lz [1.518199099s]
Feb  8 14:25:09.032: INFO: Created: latency-svc-4z5j6
Feb  8 14:25:09.056: INFO: Got endpoints: latency-svc-4z5j6 [1.574814338s]
Feb  8 14:25:09.257: INFO: Created: latency-svc-xhwjb
Feb  8 14:25:09.271: INFO: Got endpoints: latency-svc-xhwjb [1.701506385s]
Feb  8 14:25:09.334: INFO: Created: latency-svc-7cnf7
Feb  8 14:25:09.349: INFO: Got endpoints: latency-svc-7cnf7 [1.63190694s]
Feb  8 14:25:09.529: INFO: Created: latency-svc-9hqth
Feb  8 14:25:09.535: INFO: Got endpoints: latency-svc-9hqth [1.738539532s]
Feb  8 14:25:09.794: INFO: Created: latency-svc-n66qx
Feb  8 14:25:09.797: INFO: Got endpoints: latency-svc-n66qx [1.885686687s]
Feb  8 14:25:09.973: INFO: Created: latency-svc-zdlp2
Feb  8 14:25:10.027: INFO: Got endpoints: latency-svc-zdlp2 [2.059644268s]
Feb  8 14:25:10.077: INFO: Created: latency-svc-hft8d
Feb  8 14:25:10.214: INFO: Got endpoints: latency-svc-hft8d [2.10879537s]
Feb  8 14:25:10.283: INFO: Created: latency-svc-xhmhf
Feb  8 14:25:10.291: INFO: Got endpoints: latency-svc-xhmhf [2.034142891s]
Feb  8 14:25:10.393: INFO: Created: latency-svc-rnp52
Feb  8 14:25:10.397: INFO: Got endpoints: latency-svc-rnp52 [2.110091316s]
Feb  8 14:25:10.430: INFO: Created: latency-svc-c8zzp
Feb  8 14:25:10.478: INFO: Got endpoints: latency-svc-c8zzp [2.005851737s]
Feb  8 14:25:10.620: INFO: Created: latency-svc-flb67
Feb  8 14:25:10.661: INFO: Got endpoints: latency-svc-flb67 [2.120940677s]
Feb  8 14:25:10.706: INFO: Created: latency-svc-gnmzr
Feb  8 14:25:10.779: INFO: Got endpoints: latency-svc-gnmzr [2.046105978s]
Feb  8 14:25:10.820: INFO: Created: latency-svc-9rr7x
Feb  8 14:25:10.825: INFO: Got endpoints: latency-svc-9rr7x [2.072189642s]
Feb  8 14:25:10.882: INFO: Created: latency-svc-92hwz
Feb  8 14:25:10.999: INFO: Got endpoints: latency-svc-92hwz [2.13935547s]
Feb  8 14:25:11.021: INFO: Created: latency-svc-77s9d
Feb  8 14:25:11.031: INFO: Got endpoints: latency-svc-77s9d [2.031824645s]
Feb  8 14:25:11.088: INFO: Created: latency-svc-bbxcd
Feb  8 14:25:11.233: INFO: Got endpoints: latency-svc-bbxcd [2.176762895s]
Feb  8 14:25:11.264: INFO: Created: latency-svc-b7ldc
Feb  8 14:25:11.304: INFO: Got endpoints: latency-svc-b7ldc [2.032976533s]
Feb  8 14:25:11.386: INFO: Created: latency-svc-zv7kw
Feb  8 14:25:11.389: INFO: Got endpoints: latency-svc-zv7kw [156.641525ms]
Feb  8 14:25:11.445: INFO: Created: latency-svc-d8wcr
Feb  8 14:25:11.449: INFO: Got endpoints: latency-svc-d8wcr [2.099458982s]
Feb  8 14:25:11.480: INFO: Created: latency-svc-6gnkd
Feb  8 14:25:11.538: INFO: Got endpoints: latency-svc-6gnkd [2.002364892s]
Feb  8 14:25:11.572: INFO: Created: latency-svc-tqjbr
Feb  8 14:25:11.592: INFO: Got endpoints: latency-svc-tqjbr [1.794851767s]
Feb  8 14:25:11.633: INFO: Created: latency-svc-sfhf6
Feb  8 14:25:11.726: INFO: Got endpoints: latency-svc-sfhf6 [1.698438709s]
Feb  8 14:25:11.740: INFO: Created: latency-svc-zt6sq
Feb  8 14:25:11.740: INFO: Got endpoints: latency-svc-zt6sq [1.525791226s]
Feb  8 14:25:11.807: INFO: Created: latency-svc-cb4zv
Feb  8 14:25:11.882: INFO: Got endpoints: latency-svc-cb4zv [1.591545259s]
Feb  8 14:25:11.934: INFO: Created: latency-svc-sgg24
Feb  8 14:25:11.953: INFO: Got endpoints: latency-svc-sgg24 [1.556403831s]
Feb  8 14:25:12.074: INFO: Created: latency-svc-fwwzc
Feb  8 14:25:12.104: INFO: Got endpoints: latency-svc-fwwzc [1.625514623s]
Feb  8 14:25:12.113: INFO: Created: latency-svc-4tgt5
Feb  8 14:25:12.124: INFO: Got endpoints: latency-svc-4tgt5 [1.463002118s]
Feb  8 14:25:12.256: INFO: Created: latency-svc-cmvbm
Feb  8 14:25:12.273: INFO: Got endpoints: latency-svc-cmvbm [1.494021466s]
Feb  8 14:25:12.427: INFO: Created: latency-svc-kg9kw
Feb  8 14:25:12.432: INFO: Got endpoints: latency-svc-kg9kw [1.606698795s]
Feb  8 14:25:12.514: INFO: Created: latency-svc-vn74g
Feb  8 14:25:12.610: INFO: Got endpoints: latency-svc-vn74g [1.6101211s]
Feb  8 14:25:12.646: INFO: Created: latency-svc-gfbsw
Feb  8 14:25:12.656: INFO: Got endpoints: latency-svc-gfbsw [1.624749889s]
Feb  8 14:25:12.801: INFO: Created: latency-svc-p4456
Feb  8 14:25:12.814: INFO: Got endpoints: latency-svc-p4456 [1.509097832s]
Feb  8 14:25:12.885: INFO: Created: latency-svc-vfl8s
Feb  8 14:25:12.885: INFO: Got endpoints: latency-svc-vfl8s [1.495608651s]
Feb  8 14:25:12.987: INFO: Created: latency-svc-gnrzw
Feb  8 14:25:12.990: INFO: Got endpoints: latency-svc-gnrzw [1.541744866s]
Feb  8 14:25:13.057: INFO: Created: latency-svc-v8k4q
Feb  8 14:25:13.057: INFO: Got endpoints: latency-svc-v8k4q [1.519097537s]
Feb  8 14:25:13.183: INFO: Created: latency-svc-gl2pf
Feb  8 14:25:13.184: INFO: Got endpoints: latency-svc-gl2pf [1.592586224s]
Feb  8 14:25:13.402: INFO: Created: latency-svc-f4gs2
Feb  8 14:25:13.409: INFO: Got endpoints: latency-svc-f4gs2 [1.682690157s]
Feb  8 14:25:13.443: INFO: Created: latency-svc-k2mj8
Feb  8 14:25:13.462: INFO: Got endpoints: latency-svc-k2mj8 [1.72156698s]
Feb  8 14:25:13.540: INFO: Created: latency-svc-5w2l4
Feb  8 14:25:13.555: INFO: Got endpoints: latency-svc-5w2l4 [1.672553137s]
Feb  8 14:25:13.607: INFO: Created: latency-svc-qp6wv
Feb  8 14:25:13.702: INFO: Got endpoints: latency-svc-qp6wv [1.74847176s]
Feb  8 14:25:13.733: INFO: Created: latency-svc-4xhxg
Feb  8 14:25:13.778: INFO: Got endpoints: latency-svc-4xhxg [1.674011774s]
Feb  8 14:25:13.781: INFO: Created: latency-svc-2zmql
Feb  8 14:25:13.804: INFO: Got endpoints: latency-svc-2zmql [1.679437699s]
Feb  8 14:25:13.918: INFO: Created: latency-svc-9g4ls
Feb  8 14:25:13.942: INFO: Got endpoints: latency-svc-9g4ls [1.668409682s]
Feb  8 14:25:13.992: INFO: Created: latency-svc-hflcj
Feb  8 14:25:14.065: INFO: Got endpoints: latency-svc-hflcj [1.633033921s]
Feb  8 14:25:14.087: INFO: Created: latency-svc-569cc
Feb  8 14:25:14.115: INFO: Got endpoints: latency-svc-569cc [1.505411901s]
Feb  8 14:25:14.166: INFO: Created: latency-svc-ts2pp
Feb  8 14:25:14.257: INFO: Got endpoints: latency-svc-ts2pp [1.600797511s]
Feb  8 14:25:14.279: INFO: Created: latency-svc-ljjmj
Feb  8 14:25:14.300: INFO: Got endpoints: latency-svc-ljjmj [1.485849125s]
Feb  8 14:25:14.360: INFO: Created: latency-svc-w62pg
Feb  8 14:25:14.431: INFO: Got endpoints: latency-svc-w62pg [1.546209919s]
Feb  8 14:25:14.458: INFO: Created: latency-svc-crpz6
Feb  8 14:25:14.487: INFO: Got endpoints: latency-svc-crpz6 [1.496231164s]
Feb  8 14:25:14.516: INFO: Created: latency-svc-qvvhr
Feb  8 14:25:14.528: INFO: Got endpoints: latency-svc-qvvhr [1.471096074s]
Feb  8 14:25:14.639: INFO: Created: latency-svc-s5dqn
Feb  8 14:25:14.767: INFO: Created: latency-svc-lkjxq
Feb  8 14:25:14.777: INFO: Got endpoints: latency-svc-s5dqn [1.593041353s]
Feb  8 14:25:14.825: INFO: Got endpoints: latency-svc-lkjxq [1.415819312s]
Feb  8 14:25:14.836: INFO: Created: latency-svc-zr7mc
Feb  8 14:25:14.864: INFO: Got endpoints: latency-svc-zr7mc [1.401415811s]
Feb  8 14:25:14.956: INFO: Created: latency-svc-wbpgk
Feb  8 14:25:14.980: INFO: Got endpoints: latency-svc-wbpgk [1.425247196s]
Feb  8 14:25:15.043: INFO: Created: latency-svc-vmlrp
Feb  8 14:25:15.107: INFO: Got endpoints: latency-svc-vmlrp [1.405427267s]
Feb  8 14:25:15.131: INFO: Created: latency-svc-mwqqj
Feb  8 14:25:15.142: INFO: Got endpoints: latency-svc-mwqqj [1.363960485s]
Feb  8 14:25:15.208: INFO: Created: latency-svc-bxtzt
Feb  8 14:25:15.450: INFO: Got endpoints: latency-svc-bxtzt [1.646671509s]
Feb  8 14:25:15.494: INFO: Created: latency-svc-85tbw
Feb  8 14:25:15.498: INFO: Got endpoints: latency-svc-85tbw [1.556661576s]
Feb  8 14:25:15.538: INFO: Created: latency-svc-zzgzw
Feb  8 14:25:15.543: INFO: Got endpoints: latency-svc-zzgzw [1.476798073s]
Feb  8 14:25:15.634: INFO: Created: latency-svc-vjm9g
Feb  8 14:25:15.642: INFO: Got endpoints: latency-svc-vjm9g [1.526737651s]
Feb  8 14:25:15.768: INFO: Created: latency-svc-bvqj9
Feb  8 14:25:15.783: INFO: Got endpoints: latency-svc-bvqj9 [1.526589703s]
Feb  8 14:25:15.819: INFO: Created: latency-svc-ngxr7
Feb  8 14:25:15.829: INFO: Got endpoints: latency-svc-ngxr7 [1.528741883s]
Feb  8 14:25:15.916: INFO: Created: latency-svc-ndfxh
Feb  8 14:25:15.927: INFO: Got endpoints: latency-svc-ndfxh [1.495036122s]
Feb  8 14:25:15.989: INFO: Created: latency-svc-qf6fz
Feb  8 14:25:16.003: INFO: Got endpoints: latency-svc-qf6fz [1.51621482s]
Feb  8 14:25:16.087: INFO: Created: latency-svc-chdwn
Feb  8 14:25:16.103: INFO: Got endpoints: latency-svc-chdwn [1.574702119s]
Feb  8 14:25:16.254: INFO: Created: latency-svc-df9bb
Feb  8 14:25:16.255: INFO: Got endpoints: latency-svc-df9bb [1.477213789s]
Feb  8 14:25:16.380: INFO: Created: latency-svc-6d85p
Feb  8 14:25:16.431: INFO: Got endpoints: latency-svc-6d85p [1.605699079s]
Feb  8 14:25:16.498: INFO: Created: latency-svc-6pzbn
Feb  8 14:25:16.498: INFO: Got endpoints: latency-svc-6pzbn [1.634148772s]
Feb  8 14:25:16.615: INFO: Created: latency-svc-pgxfl
Feb  8 14:25:16.628: INFO: Got endpoints: latency-svc-pgxfl [1.647998125s]
Feb  8 14:25:16.689: INFO: Created: latency-svc-l92ck
Feb  8 14:25:16.785: INFO: Got endpoints: latency-svc-l92ck [1.677647503s]
Feb  8 14:25:16.867: INFO: Created: latency-svc-p9k96
Feb  8 14:25:16.868: INFO: Got endpoints: latency-svc-p9k96 [1.725359999s]
Feb  8 14:25:17.038: INFO: Created: latency-svc-vlv5n
Feb  8 14:25:17.059: INFO: Got endpoints: latency-svc-vlv5n [1.608710409s]
Feb  8 14:25:17.141: INFO: Created: latency-svc-pzp29
Feb  8 14:25:17.147: INFO: Got endpoints: latency-svc-pzp29 [1.648251878s]
Feb  8 14:25:17.369: INFO: Created: latency-svc-cptwg
Feb  8 14:25:17.372: INFO: Got endpoints: latency-svc-cptwg [1.829807035s]
Feb  8 14:25:17.517: INFO: Created: latency-svc-kq9wc
Feb  8 14:25:17.525: INFO: Got endpoints: latency-svc-kq9wc [1.882898976s]
Feb  8 14:25:17.581: INFO: Created: latency-svc-rg59w
Feb  8 14:25:17.581: INFO: Got endpoints: latency-svc-rg59w [1.797798547s]
Feb  8 14:25:17.700: INFO: Created: latency-svc-pxt5t
Feb  8 14:25:17.762: INFO: Created: latency-svc-697zt
Feb  8 14:25:17.770: INFO: Got endpoints: latency-svc-pxt5t [1.940978273s]
Feb  8 14:25:17.793: INFO: Got endpoints: latency-svc-697zt [1.866223035s]
Feb  8 14:25:17.892: INFO: Created: latency-svc-t97md
Feb  8 14:25:17.897: INFO: Got endpoints: latency-svc-t97md [1.893324225s]
Feb  8 14:25:18.039: INFO: Created: latency-svc-qnhfw
Feb  8 14:25:18.041: INFO: Got endpoints: latency-svc-qnhfw [1.937670726s]
Feb  8 14:25:18.137: INFO: Created: latency-svc-lftfn
Feb  8 14:25:18.197: INFO: Got endpoints: latency-svc-lftfn [1.941605751s]
Feb  8 14:25:18.212: INFO: Created: latency-svc-vlbvc
Feb  8 14:25:18.212: INFO: Got endpoints: latency-svc-vlbvc [1.780917518s]
Feb  8 14:25:18.276: INFO: Created: latency-svc-z4mxb
Feb  8 14:25:18.422: INFO: Got endpoints: latency-svc-z4mxb [1.92382373s]
Feb  8 14:25:18.466: INFO: Created: latency-svc-k2bc8
Feb  8 14:25:18.493: INFO: Got endpoints: latency-svc-k2bc8 [1.864897082s]
Feb  8 14:25:18.621: INFO: Created: latency-svc-gmdcz
Feb  8 14:25:18.633: INFO: Got endpoints: latency-svc-gmdcz [1.847622814s]
Feb  8 14:25:18.687: INFO: Created: latency-svc-xspkv
Feb  8 14:25:18.813: INFO: Got endpoints: latency-svc-xspkv [1.945115431s]
Feb  8 14:25:18.816: INFO: Created: latency-svc-6p4fc
Feb  8 14:25:18.820: INFO: Got endpoints: latency-svc-6p4fc [1.761019624s]
Feb  8 14:25:18.883: INFO: Created: latency-svc-mbxdf
Feb  8 14:25:18.896: INFO: Got endpoints: latency-svc-mbxdf [1.74901079s]
Feb  8 14:25:19.043: INFO: Created: latency-svc-zp9jk
Feb  8 14:25:19.073: INFO: Got endpoints: latency-svc-zp9jk [1.70055038s]
Feb  8 14:25:19.110: INFO: Created: latency-svc-n9v26
Feb  8 14:25:19.130: INFO: Got endpoints: latency-svc-n9v26 [1.604974012s]
Feb  8 14:25:19.226: INFO: Created: latency-svc-5497l
Feb  8 14:25:19.240: INFO: Got endpoints: latency-svc-5497l [1.659146368s]
Feb  8 14:25:19.302: INFO: Created: latency-svc-h26ks
Feb  8 14:25:19.483: INFO: Got endpoints: latency-svc-h26ks [1.713098097s]
Feb  8 14:25:19.501: INFO: Created: latency-svc-kqdh6
Feb  8 14:25:19.516: INFO: Got endpoints: latency-svc-kqdh6 [1.722720531s]
Feb  8 14:25:19.562: INFO: Created: latency-svc-lqjkw
Feb  8 14:25:19.575: INFO: Got endpoints: latency-svc-lqjkw [1.678004839s]
Feb  8 14:25:19.663: INFO: Created: latency-svc-26gmh
Feb  8 14:25:19.686: INFO: Got endpoints: latency-svc-26gmh [1.645117263s]
Feb  8 14:25:19.746: INFO: Created: latency-svc-scsl5
Feb  8 14:25:19.748: INFO: Got endpoints: latency-svc-scsl5 [1.551205338s]
Feb  8 14:25:19.902: INFO: Created: latency-svc-r4xjg
Feb  8 14:25:19.902: INFO: Got endpoints: latency-svc-r4xjg [1.689629109s]
Feb  8 14:25:19.944: INFO: Created: latency-svc-lxm2p
Feb  8 14:25:19.947: INFO: Got endpoints: latency-svc-lxm2p [1.525495695s]
Feb  8 14:25:20.090: INFO: Created: latency-svc-4n2mt
Feb  8 14:25:20.133: INFO: Got endpoints: latency-svc-4n2mt [1.639181014s]
Feb  8 14:25:20.259: INFO: Created: latency-svc-npdsw
Feb  8 14:25:20.262: INFO: Got endpoints: latency-svc-npdsw [1.629351135s]
Feb  8 14:25:20.311: INFO: Created: latency-svc-2zdwj
Feb  8 14:25:20.349: INFO: Got endpoints: latency-svc-2zdwj [1.535485945s]
Feb  8 14:25:20.463: INFO: Created: latency-svc-frtkf
Feb  8 14:25:20.708: INFO: Got endpoints: latency-svc-frtkf [1.886999658s]
Feb  8 14:25:20.724: INFO: Created: latency-svc-8z94l
Feb  8 14:25:20.940: INFO: Got endpoints: latency-svc-8z94l [2.044076973s]
Feb  8 14:25:21.004: INFO: Created: latency-svc-7xwn4
Feb  8 14:25:21.020: INFO: Got endpoints: latency-svc-7xwn4 [1.946753584s]
Feb  8 14:25:21.273: INFO: Created: latency-svc-7plcg
Feb  8 14:25:21.280: INFO: Got endpoints: latency-svc-7plcg [2.149892629s]
Feb  8 14:25:21.340: INFO: Created: latency-svc-qb8zn
Feb  8 14:25:21.351: INFO: Got endpoints: latency-svc-qb8zn [2.110350798s]
Feb  8 14:25:21.581: INFO: Created: latency-svc-wpfwk
Feb  8 14:25:21.594: INFO: Got endpoints: latency-svc-wpfwk [2.110790986s]
Feb  8 14:25:21.799: INFO: Created: latency-svc-frjz7
Feb  8 14:25:21.820: INFO: Got endpoints: latency-svc-frjz7 [2.304683555s]
Feb  8 14:25:21.876: INFO: Created: latency-svc-c6z2m
Feb  8 14:25:22.015: INFO: Got endpoints: latency-svc-c6z2m [2.439544585s]
Feb  8 14:25:22.044: INFO: Created: latency-svc-86xg7
Feb  8 14:25:22.057: INFO: Got endpoints: latency-svc-86xg7 [2.371047111s]
Feb  8 14:25:22.093: INFO: Created: latency-svc-lx9fv
Feb  8 14:25:22.102: INFO: Got endpoints: latency-svc-lx9fv [2.354208217s]
Feb  8 14:25:22.201: INFO: Created: latency-svc-hdk6r
Feb  8 14:25:22.206: INFO: Got endpoints: latency-svc-hdk6r [2.30356995s]
Feb  8 14:25:22.289: INFO: Created: latency-svc-gzk6c
Feb  8 14:25:22.384: INFO: Created: latency-svc-5m426
Feb  8 14:25:22.384: INFO: Got endpoints: latency-svc-gzk6c [2.436462224s]
Feb  8 14:25:22.440: INFO: Got endpoints: latency-svc-5m426 [2.307619872s]
Feb  8 14:25:22.441: INFO: Created: latency-svc-hhgbv
Feb  8 14:25:22.637: INFO: Got endpoints: latency-svc-hhgbv [2.375020492s]
Feb  8 14:25:22.638: INFO: Created: latency-svc-9dzkg
Feb  8 14:25:22.648: INFO: Got endpoints: latency-svc-9dzkg [2.298981018s]
Feb  8 14:25:22.689: INFO: Created: latency-svc-9w2b5
Feb  8 14:25:22.708: INFO: Got endpoints: latency-svc-9w2b5 [2.00009126s]
Feb  8 14:25:22.819: INFO: Created: latency-svc-r2hv7
Feb  8 14:25:22.836: INFO: Got endpoints: latency-svc-r2hv7 [1.895607137s]
Feb  8 14:25:22.892: INFO: Created: latency-svc-9vjcr
Feb  8 14:25:24.696: INFO: Got endpoints: latency-svc-9vjcr [3.676306872s]
Feb  8 14:25:24.759: INFO: Created: latency-svc-xr99b
Feb  8 14:25:24.786: INFO: Got endpoints: latency-svc-xr99b [3.506401807s]
Feb  8 14:25:24.955: INFO: Created: latency-svc-zn7jd
Feb  8 14:25:24.965: INFO: Got endpoints: latency-svc-zn7jd [3.613709298s]
Feb  8 14:25:25.094: INFO: Created: latency-svc-gk4dr
Feb  8 14:25:25.101: INFO: Got endpoints: latency-svc-gk4dr [3.50735665s]
Feb  8 14:25:25.160: INFO: Created: latency-svc-klkcz
Feb  8 14:25:25.172: INFO: Got endpoints: latency-svc-klkcz [3.351120022s]
Feb  8 14:25:25.257: INFO: Created: latency-svc-zqtfc
Feb  8 14:25:25.257: INFO: Got endpoints: latency-svc-zqtfc [3.241902178s]
Feb  8 14:25:25.331: INFO: Created: latency-svc-8mcfm
Feb  8 14:25:25.458: INFO: Got endpoints: latency-svc-8mcfm [3.400254936s]
Feb  8 14:25:25.464: INFO: Created: latency-svc-2jptd
Feb  8 14:25:25.479: INFO: Got endpoints: latency-svc-2jptd [3.376300064s]
Feb  8 14:25:25.538: INFO: Created: latency-svc-l2w98
Feb  8 14:25:25.539: INFO: Got endpoints: latency-svc-l2w98 [3.333514817s]
Feb  8 14:25:25.641: INFO: Created: latency-svc-jcswj
Feb  8 14:25:25.643: INFO: Got endpoints: latency-svc-jcswj [3.258861482s]
Feb  8 14:25:25.692: INFO: Created: latency-svc-5dg6v
Feb  8 14:25:25.705: INFO: Got endpoints: latency-svc-5dg6v [3.263998818s]
Feb  8 14:25:25.805: INFO: Created: latency-svc-v6pfd
Feb  8 14:25:25.806: INFO: Got endpoints: latency-svc-v6pfd [3.167856223s]
Feb  8 14:25:25.852: INFO: Created: latency-svc-n4wz2
Feb  8 14:25:25.879: INFO: Created: latency-svc-m6vjc
Feb  8 14:25:25.879: INFO: Got endpoints: latency-svc-n4wz2 [3.231257671s]
Feb  8 14:25:26.012: INFO: Got endpoints: latency-svc-m6vjc [3.303647246s]
Feb  8 14:25:26.025: INFO: Created: latency-svc-pmrr7
Feb  8 14:25:26.052: INFO: Got endpoints: latency-svc-pmrr7 [3.216126949s]
Feb  8 14:25:26.110: INFO: Created: latency-svc-qtjms
Feb  8 14:25:26.196: INFO: Got endpoints: latency-svc-qtjms [1.498986419s]
Feb  8 14:25:26.234: INFO: Created: latency-svc-hpjt2
Feb  8 14:25:26.245: INFO: Got endpoints: latency-svc-hpjt2 [1.45876508s]
Feb  8 14:25:26.276: INFO: Created: latency-svc-7g2zk
Feb  8 14:25:26.284: INFO: Got endpoints: latency-svc-7g2zk [1.318866517s]
Feb  8 14:25:26.469: INFO: Created: latency-svc-9htfj
Feb  8 14:25:26.477: INFO: Got endpoints: latency-svc-9htfj [1.375226838s]
Feb  8 14:25:26.655: INFO: Created: latency-svc-bhpcc
Feb  8 14:25:26.657: INFO: Got endpoints: latency-svc-bhpcc [1.484513051s]
Feb  8 14:25:26.725: INFO: Created: latency-svc-cpnwx
Feb  8 14:25:26.725: INFO: Got endpoints: latency-svc-cpnwx [1.468193889s]
Feb  8 14:25:26.825: INFO: Created: latency-svc-n95fs
Feb  8 14:25:26.842: INFO: Got endpoints: latency-svc-n95fs [1.384166791s]
Feb  8 14:25:26.903: INFO: Created: latency-svc-z86w6
Feb  8 14:25:27.054: INFO: Created: latency-svc-44zgj
Feb  8 14:25:27.067: INFO: Got endpoints: latency-svc-z86w6 [1.58804375s]
Feb  8 14:25:27.071: INFO: Got endpoints: latency-svc-44zgj [1.531955886s]
Feb  8 14:25:27.095: INFO: Created: latency-svc-lltgw
Feb  8 14:25:27.100: INFO: Got endpoints: latency-svc-lltgw [1.456919498s]
Feb  8 14:25:27.209: INFO: Created: latency-svc-z2fv7
Feb  8 14:25:27.209: INFO: Got endpoints: latency-svc-z2fv7 [1.504084309s]
Feb  8 14:25:27.261: INFO: Created: latency-svc-6qk79
Feb  8 14:25:27.291: INFO: Got endpoints: latency-svc-6qk79 [1.485464958s]
Feb  8 14:25:27.295: INFO: Created: latency-svc-tgmxj
Feb  8 14:25:27.366: INFO: Got endpoints: latency-svc-tgmxj [1.486293307s]
Feb  8 14:25:27.386: INFO: Created: latency-svc-gssdx
Feb  8 14:25:27.394: INFO: Got endpoints: latency-svc-gssdx [1.382287139s]
Feb  8 14:25:27.438: INFO: Created: latency-svc-9b6fz
Feb  8 14:25:27.453: INFO: Got endpoints: latency-svc-9b6fz [1.401366293s]
Feb  8 14:25:27.593: INFO: Created: latency-svc-ncdtl
Feb  8 14:25:27.623: INFO: Got endpoints: latency-svc-ncdtl [1.42673442s]
Feb  8 14:25:27.647: INFO: Created: latency-svc-tj9pw
Feb  8 14:25:27.657: INFO: Got endpoints: latency-svc-tj9pw [1.411244273s]
Feb  8 14:25:27.692: INFO: Created: latency-svc-qd5vc
Feb  8 14:25:27.763: INFO: Got endpoints: latency-svc-qd5vc [1.478867668s]
Feb  8 14:25:27.832: INFO: Created: latency-svc-mznnl
Feb  8 14:25:27.928: INFO: Got endpoints: latency-svc-mznnl [1.451092069s]
Feb  8 14:25:27.977: INFO: Created: latency-svc-j97nq
Feb  8 14:25:27.994: INFO: Got endpoints: latency-svc-j97nq [1.33790622s]
Feb  8 14:25:28.141: INFO: Created: latency-svc-ntb9q
Feb  8 14:25:28.151: INFO: Got endpoints: latency-svc-ntb9q [1.425894994s]
Feb  8 14:25:28.196: INFO: Created: latency-svc-5w9fw
Feb  8 14:25:28.295: INFO: Got endpoints: latency-svc-5w9fw [1.452854002s]
Feb  8 14:25:28.326: INFO: Created: latency-svc-hrzkb
Feb  8 14:25:28.335: INFO: Got endpoints: latency-svc-hrzkb [1.267554531s]
Feb  8 14:25:28.390: INFO: Created: latency-svc-k9f2w
Feb  8 14:25:28.390: INFO: Got endpoints: latency-svc-k9f2w [1.318551398s]
Feb  8 14:25:28.519: INFO: Created: latency-svc-nj28z
Feb  8 14:25:28.546: INFO: Got endpoints: latency-svc-nj28z [1.446259441s]
Feb  8 14:25:28.548: INFO: Created: latency-svc-d6pgp
Feb  8 14:25:28.639: INFO: Got endpoints: latency-svc-d6pgp [1.430218843s]
Feb  8 14:25:28.660: INFO: Created: latency-svc-m9vm8
Feb  8 14:25:28.660: INFO: Got endpoints: latency-svc-m9vm8 [1.368535248s]
Feb  8 14:25:28.692: INFO: Created: latency-svc-6dsn6
Feb  8 14:25:28.692: INFO: Got endpoints: latency-svc-6dsn6 [1.325458566s]
Feb  8 14:25:28.730: INFO: Created: latency-svc-476fd
Feb  8 14:25:28.732: INFO: Got endpoints: latency-svc-476fd [1.337486113s]
Feb  8 14:25:28.858: INFO: Created: latency-svc-tmxhk
Feb  8 14:25:28.894: INFO: Got endpoints: latency-svc-tmxhk [1.44082445s]
Feb  8 14:25:28.941: INFO: Created: latency-svc-g2gbg
Feb  8 14:25:29.038: INFO: Got endpoints: latency-svc-g2gbg [1.415444636s]
Feb  8 14:25:29.115: INFO: Created: latency-svc-5rwdv
Feb  8 14:25:29.131: INFO: Created: latency-svc-wqt8n
Feb  8 14:25:29.133: INFO: Got endpoints: latency-svc-5rwdv [1.476725173s]
Feb  8 14:25:29.138: INFO: Got endpoints: latency-svc-wqt8n [1.375169444s]
Feb  8 14:25:29.239: INFO: Created: latency-svc-cnc8m
Feb  8 14:25:29.262: INFO: Got endpoints: latency-svc-cnc8m [1.334223849s]
Feb  8 14:25:29.271: INFO: Created: latency-svc-5gqzd
Feb  8 14:25:29.275: INFO: Got endpoints: latency-svc-5gqzd [1.280757109s]
Feb  8 14:25:29.453: INFO: Created: latency-svc-sjpsk
Feb  8 14:25:29.467: INFO: Got endpoints: latency-svc-sjpsk [1.316056324s]
Feb  8 14:25:29.578: INFO: Created: latency-svc-5j4rm
Feb  8 14:25:29.578: INFO: Got endpoints: latency-svc-5j4rm [1.283041475s]
Feb  8 14:25:29.639: INFO: Created: latency-svc-4xpbm
Feb  8 14:25:29.652: INFO: Got endpoints: latency-svc-4xpbm [1.317298276s]
Feb  8 14:25:29.742: INFO: Created: latency-svc-4kkvg
Feb  8 14:25:29.790: INFO: Got endpoints: latency-svc-4kkvg [1.399481915s]
Feb  8 14:25:29.832: INFO: Created: latency-svc-jssg7
Feb  8 14:25:29.832: INFO: Got endpoints: latency-svc-jssg7 [1.285436175s]
Feb  8 14:25:29.937: INFO: Created: latency-svc-hrf9c
Feb  8 14:25:29.952: INFO: Got endpoints: latency-svc-hrf9c [1.312974013s]
Feb  8 14:25:30.037: INFO: Created: latency-svc-qgz49
Feb  8 14:25:30.044: INFO: Got endpoints: latency-svc-qgz49 [1.383931039s]
Feb  8 14:25:30.122: INFO: Created: latency-svc-xjzzf
Feb  8 14:25:30.213: INFO: Got endpoints: latency-svc-xjzzf [1.521038166s]
Feb  8 14:25:30.225: INFO: Created: latency-svc-kfj69
Feb  8 14:25:30.238: INFO: Got endpoints: latency-svc-kfj69 [1.506019093s]
Feb  8 14:25:30.291: INFO: Created: latency-svc-hf5n9
Feb  8 14:25:30.297: INFO: Got endpoints: latency-svc-hf5n9 [1.402139004s]
Feb  8 14:25:30.408: INFO: Created: latency-svc-bzfwx
Feb  8 14:25:30.423: INFO: Got endpoints: latency-svc-bzfwx [1.384329185s]
Feb  8 14:25:30.455: INFO: Created: latency-svc-xcchn
Feb  8 14:25:30.461: INFO: Got endpoints: latency-svc-xcchn [1.327509796s]
Feb  8 14:25:30.628: INFO: Created: latency-svc-k85gc
Feb  8 14:25:30.642: INFO: Got endpoints: latency-svc-k85gc [1.503798505s]
Feb  8 14:25:30.702: INFO: Created: latency-svc-vrfs2
Feb  8 14:25:30.710: INFO: Got endpoints: latency-svc-vrfs2 [1.447944058s]
Feb  8 14:25:30.811: INFO: Created: latency-svc-89vwh
Feb  8 14:25:30.843: INFO: Got endpoints: latency-svc-89vwh [1.568010763s]
Feb  8 14:25:30.844: INFO: Created: latency-svc-66q5v
Feb  8 14:25:30.847: INFO: Got endpoints: latency-svc-66q5v [1.379745225s]
Feb  8 14:25:30.879: INFO: Created: latency-svc-rtdr4
Feb  8 14:25:30.881: INFO: Got endpoints: latency-svc-rtdr4 [1.30310604s]
Feb  8 14:25:31.033: INFO: Created: latency-svc-pb89s
Feb  8 14:25:31.036: INFO: Got endpoints: latency-svc-pb89s [1.383600921s]
Feb  8 14:25:31.036: INFO: Latencies: [88.846394ms 156.641525ms 236.35305ms 316.213986ms 430.469513ms 487.183376ms 625.201598ms 775.795443ms 805.940653ms 991.546577ms 1.059258279s 1.252045397s 1.267554531s 1.272313784s 1.280757109s 1.283041475s 1.285436175s 1.30310604s 1.312974013s 1.316056324s 1.317298276s 1.318551398s 1.318866517s 1.325458566s 1.327509796s 1.334223849s 1.337486113s 1.33790622s 1.363960485s 1.368535248s 1.375169444s 1.375226838s 1.379098388s 1.379745225s 1.382287139s 1.383600921s 1.383931039s 1.384166791s 1.384329185s 1.399481915s 1.401366293s 1.401415811s 1.402139004s 1.405427267s 1.411244273s 1.415444636s 1.415819312s 1.425247196s 1.425894994s 1.42673442s 1.430218843s 1.44082445s 1.446259441s 1.447944058s 1.451092069s 1.452854002s 1.456919498s 1.45876508s 1.463002118s 1.468193889s 1.471096074s 1.476725173s 1.476798073s 1.477213789s 1.478867668s 1.484513051s 1.485464958s 1.485849125s 1.486293307s 1.494021466s 1.495036122s 1.495608651s 1.496231164s 1.498986419s 1.503798505s 1.504084309s 1.505411901s 1.506019093s 1.509097832s 1.51621482s 1.518199099s 1.519097537s 1.521038166s 1.525495695s 1.525791226s 1.526589703s 1.526737651s 1.528741883s 1.531955886s 1.535485945s 1.541744866s 1.546209919s 1.551205338s 1.556403831s 1.556661576s 1.568010763s 1.574702119s 1.574814338s 1.58804375s 1.591545259s 1.592586224s 1.593041353s 1.600797511s 1.604974012s 1.605699079s 1.606698795s 1.608710409s 1.6101211s 1.624749889s 1.625514623s 1.629351135s 1.63190694s 1.633033921s 1.634148772s 1.639181014s 1.645117263s 1.646671509s 1.647998125s 1.648251878s 1.659146368s 1.668409682s 1.672553137s 1.674011774s 1.677647503s 1.678004839s 1.679437699s 1.682690157s 1.689629109s 1.698438709s 1.70055038s 1.701506385s 1.713098097s 1.72156698s 1.722720531s 1.725359999s 1.738539532s 1.74847176s 1.74901079s 1.761019624s 1.780917518s 1.794851767s 1.797798547s 1.829807035s 1.847622814s 1.864897082s 1.866223035s 1.882898976s 1.885686687s 1.886999658s 1.893324225s 1.895607137s 1.92382373s 
1.937670726s 1.940978273s 1.941605751s 1.945115431s 1.946753584s 2.00009126s 2.002364892s 2.005851737s 2.031824645s 2.032976533s 2.034142891s 2.044076973s 2.046105978s 2.059644268s 2.072189642s 2.099458982s 2.10879537s 2.110091316s 2.110350798s 2.110790986s 2.120940677s 2.13935547s 2.149892629s 2.176762895s 2.298981018s 2.30356995s 2.304683555s 2.307619872s 2.354208217s 2.371047111s 2.375020492s 2.436462224s 2.439544585s 3.167856223s 3.216126949s 3.231257671s 3.241902178s 3.258861482s 3.263998818s 3.303647246s 3.333514817s 3.351120022s 3.376300064s 3.400254936s 3.506401807s 3.50735665s 3.613709298s 3.676306872s]
Feb  8 14:25:31.036: INFO: 50 %ile: 1.592586224s
Feb  8 14:25:31.036: INFO: 90 %ile: 2.354208217s
Feb  8 14:25:31.036: INFO: 99 %ile: 3.613709298s
Feb  8 14:25:31.036: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:25:31.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8168" for this suite.
Feb  8 14:26:09.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:26:09.170: INFO: namespace svc-latency-8168 deletion completed in 38.122859692s

• [SLOW TEST:69.237 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:26:09.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  8 14:26:09.237: INFO: Waiting up to 5m0s for pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856" in namespace "emptydir-1155" to be "success or failure"
Feb  8 14:26:09.243: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856": Phase="Pending", Reason="", readiness=false. Elapsed: 5.384511ms
Feb  8 14:26:11.253: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015403459s
Feb  8 14:26:13.264: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026860161s
Feb  8 14:26:15.271: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034218137s
Feb  8 14:26:17.279: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041780844s
STEP: Saw pod success
Feb  8 14:26:17.279: INFO: Pod "pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856" satisfied condition "success or failure"
Feb  8 14:26:17.282: INFO: Trying to get logs from node iruya-node pod pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856 container test-container: 
STEP: delete the pod
Feb  8 14:26:17.358: INFO: Waiting for pod pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856 to disappear
Feb  8 14:26:17.364: INFO: Pod pod-4eef5f8f-38e8-4d2f-bf60-4021d2dfc856 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:26:17.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1155" for this suite.
Feb  8 14:26:23.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:26:23.512: INFO: namespace emptydir-1155 deletion completed in 6.142906035s

• [SLOW TEST:14.342 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:26:23.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1189
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  8 14:26:23.605: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  8 14:26:57.943: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1189 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:26:57.943: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:26:58.017295       8 log.go:172] (0xc00154e0b0) (0xc0017a0320) Create stream
I0208 14:26:58.017482       8 log.go:172] (0xc00154e0b0) (0xc0017a0320) Stream added, broadcasting: 1
I0208 14:26:58.032848       8 log.go:172] (0xc00154e0b0) Reply frame received for 1
I0208 14:26:58.032925       8 log.go:172] (0xc00154e0b0) (0xc001c1a320) Create stream
I0208 14:26:58.032935       8 log.go:172] (0xc00154e0b0) (0xc001c1a320) Stream added, broadcasting: 3
I0208 14:26:58.035807       8 log.go:172] (0xc00154e0b0) Reply frame received for 3
I0208 14:26:58.035902       8 log.go:172] (0xc00154e0b0) (0xc001582000) Create stream
I0208 14:26:58.035936       8 log.go:172] (0xc00154e0b0) (0xc001582000) Stream added, broadcasting: 5
I0208 14:26:58.038019       8 log.go:172] (0xc00154e0b0) Reply frame received for 5
I0208 14:26:58.150812       8 log.go:172] (0xc00154e0b0) Data frame received for 3
I0208 14:26:58.150874       8 log.go:172] (0xc001c1a320) (3) Data frame handling
I0208 14:26:58.150889       8 log.go:172] (0xc001c1a320) (3) Data frame sent
I0208 14:26:58.290107       8 log.go:172] (0xc00154e0b0) Data frame received for 1
I0208 14:26:58.290157       8 log.go:172] (0xc00154e0b0) (0xc001582000) Stream removed, broadcasting: 5
I0208 14:26:58.290221       8 log.go:172] (0xc0017a0320) (1) Data frame handling
I0208 14:26:58.290249       8 log.go:172] (0xc0017a0320) (1) Data frame sent
I0208 14:26:58.290266       8 log.go:172] (0xc00154e0b0) (0xc001c1a320) Stream removed, broadcasting: 3
I0208 14:26:58.290328       8 log.go:172] (0xc00154e0b0) (0xc0017a0320) Stream removed, broadcasting: 1
I0208 14:26:58.290449       8 log.go:172] (0xc00154e0b0) (0xc0017a0320) Stream removed, broadcasting: 1
I0208 14:26:58.290465       8 log.go:172] (0xc00154e0b0) (0xc001c1a320) Stream removed, broadcasting: 3
I0208 14:26:58.290479       8 log.go:172] (0xc00154e0b0) (0xc001582000) Stream removed, broadcasting: 5
Feb  8 14:26:58.290: INFO: Found all expected endpoints: [netserver-0]
I0208 14:26:58.290614       8 log.go:172] (0xc00154e0b0) Go away received
Feb  8 14:26:58.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1189 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  8 14:26:58.302: INFO: >>> kubeConfig: /root/.kube/config
I0208 14:26:58.397366       8 log.go:172] (0xc0007a0790) (0xc001c1a500) Create stream
I0208 14:26:58.397486       8 log.go:172] (0xc0007a0790) (0xc001c1a500) Stream added, broadcasting: 1
I0208 14:26:58.408609       8 log.go:172] (0xc0007a0790) Reply frame received for 1
I0208 14:26:58.408745       8 log.go:172] (0xc0007a0790) (0xc0015820a0) Create stream
I0208 14:26:58.408793       8 log.go:172] (0xc0007a0790) (0xc0015820a0) Stream added, broadcasting: 3
I0208 14:26:58.412274       8 log.go:172] (0xc0007a0790) Reply frame received for 3
I0208 14:26:58.412381       8 log.go:172] (0xc0007a0790) (0xc0017a03c0) Create stream
I0208 14:26:58.412404       8 log.go:172] (0xc0007a0790) (0xc0017a03c0) Stream added, broadcasting: 5
I0208 14:26:58.414062       8 log.go:172] (0xc0007a0790) Reply frame received for 5
I0208 14:26:58.738842       8 log.go:172] (0xc0007a0790) Data frame received for 3
I0208 14:26:58.739015       8 log.go:172] (0xc0015820a0) (3) Data frame handling
I0208 14:26:58.739040       8 log.go:172] (0xc0015820a0) (3) Data frame sent
I0208 14:26:58.856408       8 log.go:172] (0xc0007a0790) Data frame received for 1
I0208 14:26:58.856526       8 log.go:172] (0xc0007a0790) (0xc0015820a0) Stream removed, broadcasting: 3
I0208 14:26:58.856563       8 log.go:172] (0xc001c1a500) (1) Data frame handling
I0208 14:26:58.856610       8 log.go:172] (0xc001c1a500) (1) Data frame sent
I0208 14:26:58.856674       8 log.go:172] (0xc0007a0790) (0xc001c1a500) Stream removed, broadcasting: 1
I0208 14:26:58.856761       8 log.go:172] (0xc0007a0790) (0xc0017a03c0) Stream removed, broadcasting: 5
I0208 14:26:58.856856       8 log.go:172] (0xc0007a0790) (0xc001c1a500) Stream removed, broadcasting: 1
I0208 14:26:58.856891       8 log.go:172] (0xc0007a0790) (0xc0015820a0) Stream removed, broadcasting: 3
I0208 14:26:58.856907       8 log.go:172] (0xc0007a0790) (0xc0017a03c0) Stream removed, broadcasting: 5
I0208 14:26:58.857524       8 log.go:172] (0xc0007a0790) Go away received
Feb  8 14:26:58.857: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:26:58.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1189" for this suite.
Feb  8 14:27:22.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:27:23.001: INFO: namespace pod-network-test-1189 deletion completed in 24.122911073s

• [SLOW TEST:59.488 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:27:23.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:27:23.141: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3" in namespace "projected-9948" to be "success or failure"
Feb  8 14:27:23.154: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.804649ms
Feb  8 14:27:25.159: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018262071s
Feb  8 14:27:27.167: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025972919s
Feb  8 14:27:29.173: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03196387s
Feb  8 14:27:31.187: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045747076s
Feb  8 14:27:33.194: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.052330188s
Feb  8 14:27:35.203: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06134858s
STEP: Saw pod success
Feb  8 14:27:35.203: INFO: Pod "downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3" satisfied condition "success or failure"
Feb  8 14:27:35.207: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3 container client-container: 
STEP: delete the pod
Feb  8 14:27:35.315: INFO: Waiting for pod downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3 to disappear
Feb  8 14:27:35.353: INFO: Pod downwardapi-volume-2df4fa33-2860-498c-b801-f66f49dc2ff3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:27:35.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9948" for this suite.
Feb  8 14:27:41.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:27:41.548: INFO: namespace projected-9948 deletion completed in 6.185771193s

• [SLOW TEST:18.548 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:27:41.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-5d244418-07e7-40e5-9437-1412f426a527 in namespace container-probe-7871
Feb  8 14:27:51.692: INFO: Started pod liveness-5d244418-07e7-40e5-9437-1412f426a527 in namespace container-probe-7871
STEP: checking the pod's current state and verifying that restartCount is present
Feb  8 14:27:51.698: INFO: Initial restart count of pod liveness-5d244418-07e7-40e5-9437-1412f426a527 is 0
Feb  8 14:28:10.112: INFO: Restart count of pod container-probe-7871/liveness-5d244418-07e7-40e5-9437-1412f426a527 is now 1 (18.41399899s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:28:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7871" for this suite.
Feb  8 14:28:16.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:28:16.281: INFO: namespace container-probe-7871 deletion completed in 6.125549495s

• [SLOW TEST:34.732 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:28:16.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3006
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3006 to expose endpoints map[]
Feb  8 14:28:16.424: INFO: Get endpoints failed (5.848439ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  8 14:28:17.431: INFO: successfully validated that service endpoint-test2 in namespace services-3006 exposes endpoints map[] (1.012188247s elapsed)
STEP: Creating pod pod1 in namespace services-3006
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3006 to expose endpoints map[pod1:[80]]
Feb  8 14:28:21.558: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.110242608s elapsed, will retry)
Feb  8 14:28:25.634: INFO: successfully validated that service endpoint-test2 in namespace services-3006 exposes endpoints map[pod1:[80]] (8.187115904s elapsed)
STEP: Creating pod pod2 in namespace services-3006
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3006 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  8 14:28:30.673: INFO: Unexpected endpoints: found map[c4916135-36d5-4cb0-96a8-4575d255c041:[80]], expected map[pod1:[80] pod2:[80]] (5.029371776s elapsed, will retry)
Feb  8 14:28:33.729: INFO: successfully validated that service endpoint-test2 in namespace services-3006 exposes endpoints map[pod1:[80] pod2:[80]] (8.084746592s elapsed)
STEP: Deleting pod pod1 in namespace services-3006
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3006 to expose endpoints map[pod2:[80]]
Feb  8 14:28:34.841: INFO: successfully validated that service endpoint-test2 in namespace services-3006 exposes endpoints map[pod2:[80]] (1.101102612s elapsed)
STEP: Deleting pod pod2 in namespace services-3006
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3006 to expose endpoints map[]
Feb  8 14:28:34.953: INFO: successfully validated that service endpoint-test2 in namespace services-3006 exposes endpoints map[] (77.118555ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:28:35.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3006" for this suite.
Feb  8 14:28:57.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:28:57.284: INFO: namespace services-3006 deletion completed in 22.240404786s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.002 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:28:57.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:28:57.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859" in namespace "projected-8174" to be "success or failure"
Feb  8 14:28:57.515: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859": Phase="Pending", Reason="", readiness=false. Elapsed: 15.989728ms
Feb  8 14:28:59.524: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025364922s
Feb  8 14:29:01.537: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037596697s
Feb  8 14:29:03.546: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047339038s
Feb  8 14:29:05.557: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057452688s
STEP: Saw pod success
Feb  8 14:29:05.557: INFO: Pod "downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859" satisfied condition "success or failure"
Feb  8 14:29:05.561: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859 container client-container: 
STEP: delete the pod
Feb  8 14:29:05.703: INFO: Waiting for pod downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859 to disappear
Feb  8 14:29:05.706: INFO: Pod downwardapi-volume-29fd6fa8-96fd-4674-aa2c-614065963859 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:29:05.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8174" for this suite.
Feb  8 14:29:11.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:29:11.891: INFO: namespace projected-8174 deletion completed in 6.180337803s

• [SLOW TEST:14.607 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:29:11.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  8 14:29:12.045: INFO: PodSpec: initContainers in spec.initContainers
Feb  8 14:30:19.112: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1f462255-30fc-4151-84d5-d148ff86fc9b", GenerateName:"", Namespace:"init-container-2515", SelfLink:"/api/v1/namespaces/init-container-2515/pods/pod-init-1f462255-30fc-4151-84d5-d148ff86fc9b", UID:"8f4f9192-1366-4e7d-8b04-777520e5e380", ResourceVersion:"23580682", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716768952, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"45550128"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dffmq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e5f680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dffmq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dffmq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dffmq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f4eb78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00289d200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f4ec00)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f4ec20)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f4ec28), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f4ec2c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768952, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768952, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768952, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716768952, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002a382a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002186ee0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002186f50)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://52d2e47a1313f0009f19e6de3e512158ec88e3f8d200a39855307260dc4eb58c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a382e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a382c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:30:19.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2515" for this suite.
Feb  8 14:30:41.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:30:41.263: INFO: namespace init-container-2515 deletion completed in 22.135253703s

• [SLOW TEST:89.372 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:30:41.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  8 14:30:41.403: INFO: Waiting up to 5m0s for pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015" in namespace "containers-1413" to be "success or failure"
Feb  8 14:30:41.408: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245409ms
Feb  8 14:30:43.419: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015378816s
Feb  8 14:30:45.491: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087320023s
Feb  8 14:30:47.498: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094460484s
Feb  8 14:30:49.506: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102408664s
STEP: Saw pod success
Feb  8 14:30:49.506: INFO: Pod "client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015" satisfied condition "success or failure"
Feb  8 14:30:49.510: INFO: Trying to get logs from node iruya-node pod client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015 container test-container: 
STEP: delete the pod
Feb  8 14:30:49.607: INFO: Waiting for pod client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015 to disappear
Feb  8 14:30:49.615: INFO: Pod client-containers-800af79d-ad6b-4544-84dd-4a89ec59a015 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:30:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1413" for this suite.
Feb  8 14:30:55.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:30:55.807: INFO: namespace containers-1413 deletion completed in 6.183514948s

• [SLOW TEST:14.544 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
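The "override all" pod created above corresponds to setting `command` and `args` in the container spec, which replace the image's ENTRYPOINT and CMD respectively. A minimal sketch of such a manifest (names and values are illustrative, not taken from the actual e2e spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]       # replaces the image's ENTRYPOINT
    args: ["override", "all"]    # replaces the image's CMD
```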
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:30:55.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  8 14:30:55.935: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  8 14:30:55.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:30:56.434: INFO: stderr: ""
Feb  8 14:30:56.434: INFO: stdout: "service/redis-slave created\n"
Feb  8 14:30:56.434: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  8 14:30:56.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:30:56.771: INFO: stderr: ""
Feb  8 14:30:56.771: INFO: stdout: "service/redis-master created\n"
Feb  8 14:30:56.772: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  8 14:30:56.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:30:57.446: INFO: stderr: ""
Feb  8 14:30:57.446: INFO: stdout: "service/frontend created\n"
Feb  8 14:30:57.447: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  8 14:30:57.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:30:57.852: INFO: stderr: ""
Feb  8 14:30:57.853: INFO: stdout: "deployment.apps/frontend created\n"
Feb  8 14:30:57.853: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  8 14:30:57.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:30:59.924: INFO: stderr: ""
Feb  8 14:30:59.924: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  8 14:30:59.925: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  8 14:30:59.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8089'
Feb  8 14:31:00.601: INFO: stderr: ""
Feb  8 14:31:00.601: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  8 14:31:00.601: INFO: Waiting for all frontend pods to be Running.
Feb  8 14:31:25.652: INFO: Waiting for frontend to serve content.
Feb  8 14:31:25.833: INFO: Trying to add a new entry to the guestbook.
Feb  8 14:31:25.865: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  8 14:31:25.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.210: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.210: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 14:31:26.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.384: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.384: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 14:31:26.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.515: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.515: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 14:31:26.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.619: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 14:31:26.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.756: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  8 14:31:26.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8089'
Feb  8 14:31:26.905: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:31:26.905: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:31:26.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8089" for this suite.
Feb  8 14:32:09.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:32:09.248: INFO: namespace kubectl-8089 deletion completed in 42.315630296s

• [SLOW TEST:73.440 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:32:09.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:32:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7128" for this suite.
Feb  8 14:33:09.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:33:09.631: INFO: namespace kubelet-test-7128 deletion completed in 52.213240927s

• [SLOW TEST:60.383 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
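The read-only behavior exercised above is controlled by the container-level `securityContext.readOnlyRootFilesystem` flag, which makes any write to the container's root filesystem fail. A hedged sketch of the shape of such a pod (illustrative, not the exact e2e pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readonly-fs-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo test > /file"]  # write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true   # rejects writes to the root filesystem
```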
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:33:09.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  8 14:33:09.705: INFO: Waiting up to 5m0s for pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8" in namespace "downward-api-4938" to be "success or failure"
Feb  8 14:33:09.714: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.752307ms
Feb  8 14:33:11.738: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033492949s
Feb  8 14:33:13.748: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043397307s
Feb  8 14:33:15.762: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057291746s
Feb  8 14:33:17.773: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068610039s
STEP: Saw pod success
Feb  8 14:33:17.773: INFO: Pod "downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8" satisfied condition "success or failure"
Feb  8 14:33:17.777: INFO: Trying to get logs from node iruya-node pod downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8 container dapi-container: 
STEP: delete the pod
Feb  8 14:33:17.832: INFO: Waiting for pod downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8 to disappear
Feb  8 14:33:17.848: INFO: Pod downward-api-5eed83ab-4dd6-4dca-9d7a-f50ce05fd7e8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:33:17.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4938" for this suite.
Feb  8 14:33:23.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:33:24.014: INFO: namespace downward-api-4938 deletion completed in 6.157945884s

• [SLOW TEST:14.383 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
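The Downward API test above exposes `limits.cpu`/`limits.memory` via `resourceFieldRef` environment variables; when the container declares no limits of its own, the kubelet substitutes the node's allocatable values. An illustrative manifest sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu       # falls back to node allocatable when no limit is set
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```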
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:33:24.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5350.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5350.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5350.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5350.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5350.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  8 14:33:40.184: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-5350.svc.cluster.local from pod dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967: the server could not find the requested resource (get pods dns-test-900744e8-5054-46fd-b078-20f0926a0967)
Feb  8 14:33:40.193: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967: the server could not find the requested resource (get pods dns-test-900744e8-5054-46fd-b078-20f0926a0967)
Feb  8 14:33:40.198: INFO: Unable to read jessie_udp@PodARecord from pod dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967: the server could not find the requested resource (get pods dns-test-900744e8-5054-46fd-b078-20f0926a0967)
Feb  8 14:33:40.218: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967: the server could not find the requested resource (get pods dns-test-900744e8-5054-46fd-b078-20f0926a0967)
Feb  8 14:33:40.218: INFO: Lookups using dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967 failed for: [jessie_hosts@dns-querier-1.dns-test-service.dns-5350.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  8 14:33:45.296: INFO: DNS probes using dns-5350/dns-test-900744e8-5054-46fd-b078-20f0926a0967 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:33:45.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5350" for this suite.
Feb  8 14:33:51.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:33:51.614: INFO: namespace dns-5350 deletion completed in 6.178168863s

• [SLOW TEST:27.599 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:33:51.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  8 14:33:59.787: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a07fc9d2-dcd6-4145-97ce-d90a0cb3a51c,GenerateName:,Namespace:events-464,SelfLink:/api/v1/namespaces/events-464/pods/send-events-a07fc9d2-dcd6-4145-97ce-d90a0cb3a51c,UID:47998ec8-c829-42ae-81b0-02332353fb96,ResourceVersion:23581302,Generation:0,CreationTimestamp:2020-02-08 14:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 710794644,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tlmtv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tlmtv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-tlmtv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c45460} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001c45480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:33:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:33:59 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:33:59 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 14:33:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-08 14:33:51 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-08 14:33:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://60ba30aded04beae81c7ed2a09d3519f16f44d625d640a65d2fbc4719df58819}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  8 14:34:01.797: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  8 14:34:03.805: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:34:03.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-464" for this suite.
Feb  8 14:34:43.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:34:44.082: INFO: namespace events-464 deletion completed in 40.203950375s

• [SLOW TEST:52.467 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:34:44.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  8 14:34:44.221: INFO: Waiting up to 5m0s for pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86" in namespace "emptydir-7772" to be "success or failure"
Feb  8 14:34:44.260: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86": Phase="Pending", Reason="", readiness=false. Elapsed: 39.68542ms
Feb  8 14:34:46.269: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047994595s
Feb  8 14:34:48.274: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053654933s
Feb  8 14:34:50.284: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063648907s
Feb  8 14:34:52.289: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068271563s
STEP: Saw pod success
Feb  8 14:34:52.289: INFO: Pod "pod-c2afa6a1-d2ea-44f5-ba73-772582534a86" satisfied condition "success or failure"
Feb  8 14:34:52.291: INFO: Trying to get logs from node iruya-node pod pod-c2afa6a1-d2ea-44f5-ba73-772582534a86 container test-container: 
STEP: delete the pod
Feb  8 14:34:52.374: INFO: Waiting for pod pod-c2afa6a1-d2ea-44f5-ba73-772582534a86 to disappear
Feb  8 14:34:52.388: INFO: Pod pod-c2afa6a1-d2ea-44f5-ba73-772582534a86 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:34:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7772" for this suite.
Feb  8 14:34:58.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:34:58.541: INFO: namespace emptydir-7772 deletion completed in 6.150004548s

• [SLOW TEST:14.459 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
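In the `(root,0666,tmpfs)` case above, the volume is an `emptyDir` backed by memory (`medium: Memory`, i.e. tmpfs), with the test container running as root and checking a 0666-mode file in the mount. A sketch of the volume shape, assuming illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
```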
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:34:58.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  8 14:34:58.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7187'
Feb  8 14:35:01.261: INFO: stderr: ""
Feb  8 14:35:01.261: INFO: stdout: "pod/pause created\n"
Feb  8 14:35:01.261: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  8 14:35:01.261: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7187" to be "running and ready"
Feb  8 14:35:01.356: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 94.587628ms
Feb  8 14:35:03.370: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108844728s
Feb  8 14:35:05.378: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117204234s
Feb  8 14:35:07.392: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130691823s
Feb  8 14:35:09.406: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.144619008s
Feb  8 14:35:09.406: INFO: Pod "pause" satisfied condition "running and ready"
Feb  8 14:35:09.406: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  8 14:35:09.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7187'
Feb  8 14:35:09.547: INFO: stderr: ""
Feb  8 14:35:09.547: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  8 14:35:09.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7187'
Feb  8 14:35:09.626: INFO: stderr: ""
Feb  8 14:35:09.626: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  8 14:35:09.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7187'
Feb  8 14:35:09.760: INFO: stderr: ""
Feb  8 14:35:09.760: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  8 14:35:09.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7187'
Feb  8 14:35:09.853: INFO: stderr: ""
Feb  8 14:35:09.853: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  8 14:35:09.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7187'
Feb  8 14:35:09.998: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  8 14:35:09.998: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  8 14:35:09.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7187'
Feb  8 14:35:10.193: INFO: stderr: "No resources found.\n"
Feb  8 14:35:10.193: INFO: stdout: ""
Feb  8 14:35:10.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7187 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  8 14:35:10.267: INFO: stderr: ""
Feb  8 14:35:10.267: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:35:10.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7187" for this suite.
Feb  8 14:35:16.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:35:16.434: INFO: namespace kubectl-7187 deletion completed in 6.162643179s

• [SLOW TEST:17.892 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
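The repeated `Phase="Pending" ... Elapsed:` lines in the test above show the e2e framework's generic wait-for-condition pattern: poll the pod status roughly every 2 seconds until it is running and ready, or until the 5m0s budget runs out. A minimal Python sketch of that loop (names and print format are ours, simplified from the real Go framework code):

```python
import time

def wait_for_pod_running_and_ready(get_status, timeout=300.0, interval=2.0,
                                   now=time.monotonic, sleep=time.sleep):
    """Poll get_status() -> (phase, ready) every `interval` seconds until the
    pod is Running and ready, or `timeout` elapses. Mirrors the Elapsed/Phase
    log lines above; a simplified sketch, not the actual e2e framework code.
    `now` and `sleep` are injectable so the loop can be tested without waiting."""
    start = now()
    while True:
        elapsed = now() - start
        phase, ready = get_status()
        print(f'Pod: Phase="{phase}", readiness={str(ready).lower()}. Elapsed: {elapsed:.1f}s')
        if phase == "Running" and ready:
            return True
        if elapsed > timeout:
            return False
        sleep(interval)
```

The injectable clock is the design choice that matters here: the real framework uses wall-clock polling, which is why every test in this log pays a multi-second startup cost even on success.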
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:35:16.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4bb1d184-5a04-4f9b-8d31-9c09143c81a2
STEP: Creating a pod to test consume configMaps
Feb  8 14:35:16.544: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2" in namespace "projected-5066" to be "success or failure"
Feb  8 14:35:16.563: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.319072ms
Feb  8 14:35:18.581: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036194807s
Feb  8 14:35:20.590: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04524359s
Feb  8 14:35:22.603: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058231327s
Feb  8 14:35:24.614: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069763052s
STEP: Saw pod success
Feb  8 14:35:24.614: INFO: Pod "pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2" satisfied condition "success or failure"
Feb  8 14:35:24.617: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  8 14:35:24.664: INFO: Waiting for pod pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2 to disappear
Feb  8 14:35:24.672: INFO: Pod pod-projected-configmaps-fbfc73bc-4c52-46e3-b812-3699703175d2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:35:24.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5066" for this suite.
Feb  8 14:35:30.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:35:30.788: INFO: namespace projected-5066 deletion completed in 6.112689286s

• [SLOW TEST:14.354 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
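The projected configMap test above mounts a configMap as a volume, which materializes each key as a file in the mount directory. A local sketch of that projection (our own helper; the real projection is done by the kubelet, including the default 0644 file mode):

```python
import os

def project_configmap(data, mount_dir, default_mode=0o644):
    """Write each configMap key as a file under mount_dir, the way a
    configMap volume appears inside the container. Simplified sketch:
    the kubelet actually uses atomic symlink swaps for updates."""
    for key, value in data.items():
        path = os.path.join(mount_dir, key)
        with open(path, "w") as f:
            f.write(value)
        os.chmod(path, default_mode)
    return sorted(os.listdir(mount_dir))
```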
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:35:30.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  8 14:35:39.470: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3837 pod-service-account-f9a4818e-f46f-481a-b7d0-4dbc4fb29360 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  8 14:35:39.978: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3837 pod-service-account-f9a4818e-f46f-481a-b7d0-4dbc4fb29360 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  8 14:35:40.471: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3837 pod-service-account-f9a4818e-f46f-481a-b7d0-4dbc4fb29360 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:35:40.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3837" for this suite.
Feb  8 14:35:46.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:35:47.059: INFO: namespace svcaccounts-3837 deletion completed in 6.149737822s

• [SLOW TEST:16.271 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
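The three `kubectl exec ... cat` calls above read the standard files the kubelet projects into every pod at `/var/run/secrets/kubernetes.io/serviceaccount`. An in-cluster client loads the same three files; a sketch (the directory path is the Kubernetes convention, the helper name is ours):

```python
import os

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def load_service_account(sa_dir=SA_DIR):
    """Read the projected token, CA bundle, and namespace -- exactly the
    three files the test above cats out of the container."""
    def read(name):
        with open(os.path.join(sa_dir, name)) as f:
            return f.read().strip()
    return {
        "token": read("token"),          # bearer token for the API server
        "ca_crt": read("ca.crt"),        # CA bundle to verify the API server
        "namespace": read("namespace"),  # namespace the pod runs in
    }
```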
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:35:47.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b57ae9fc-874a-47fc-ad91-9f9e377237f9
STEP: Creating a pod to test consume secrets
Feb  8 14:35:47.286: INFO: Waiting up to 5m0s for pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac" in namespace "secrets-5054" to be "success or failure"
Feb  8 14:35:47.299: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Pending", Reason="", readiness=false. Elapsed: 13.154042ms
Feb  8 14:35:49.306: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019820495s
Feb  8 14:35:51.317: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030583195s
Feb  8 14:35:53.324: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037627866s
Feb  8 14:35:55.329: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042492471s
Feb  8 14:35:57.336: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049428844s
STEP: Saw pod success
Feb  8 14:35:57.336: INFO: Pod "pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac" satisfied condition "success or failure"
Feb  8 14:35:57.339: INFO: Trying to get logs from node iruya-node pod pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac container secret-volume-test: 
STEP: delete the pod
Feb  8 14:35:57.512: INFO: Waiting for pod pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac to disappear
Feb  8 14:35:57.522: INFO: Pod pod-secrets-3179c3a8-00f7-463b-9f9c-2ba4ae92beac no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:35:57.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5054" for this suite.
Feb  8 14:36:05.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:36:05.699: INFO: namespace secrets-5054 deletion completed in 8.169673922s
STEP: Destroying namespace "secret-namespace-8427" for this suite.
Feb  8 14:36:11.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:36:11.877: INFO: namespace secret-namespace-8427 deletion completed in 6.178006595s

• [SLOW TEST:24.818 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:36:11.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  8 14:36:12.034: INFO: Waiting up to 5m0s for pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97" in namespace "emptydir-4811" to be "success or failure"
Feb  8 14:36:12.071: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Pending", Reason="", readiness=false. Elapsed: 36.289808ms
Feb  8 14:36:14.078: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043268992s
Feb  8 14:36:16.083: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048446664s
Feb  8 14:36:18.089: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054239455s
Feb  8 14:36:20.097: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062798899s
Feb  8 14:36:22.119: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084654074s
STEP: Saw pod success
Feb  8 14:36:22.119: INFO: Pod "pod-c408f98e-7cc0-4fdf-9235-3f545386eb97" satisfied condition "success or failure"
Feb  8 14:36:22.124: INFO: Trying to get logs from node iruya-node pod pod-c408f98e-7cc0-4fdf-9235-3f545386eb97 container test-container: 
STEP: delete the pod
Feb  8 14:36:22.199: INFO: Waiting for pod pod-c408f98e-7cc0-4fdf-9235-3f545386eb97 to disappear
Feb  8 14:36:22.204: INFO: Pod pod-c408f98e-7cc0-4fdf-9235-3f545386eb97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:36:22.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4811" for this suite.
Feb  8 14:36:28.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:36:28.410: INFO: namespace emptydir-4811 deletion completed in 6.202459064s

• [SLOW TEST:16.533 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
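The `(root,0644,tmpfs)` case above has the test container write a file into the emptyDir mount and then report its mode and content. Locally, the permission check reduces to something like this (a sketch; the file content is sample data, not the mounttest image's exact output):

```python
import os, stat

def write_and_check(path, data=b"mount-tester new file\n", mode=0o644):
    """Create a file, force the requested mode, and return (mode, content)
    the way the test's mount-checking container reports them."""
    with open(path, "wb") as f:
        f.write(data)
    os.chmod(path, mode)  # explicit chmod, so the process umask is irrelevant
    st = os.stat(path)
    with open(path, "rb") as f:
        content = f.read()
    return stat.S_IMODE(st.st_mode), content
```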
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:36:28.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  8 14:36:28.487: INFO: Waiting up to 5m0s for pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75" in namespace "emptydir-1790" to be "success or failure"
Feb  8 14:36:28.502: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75": Phase="Pending", Reason="", readiness=false. Elapsed: 14.476178ms
Feb  8 14:36:30.527: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03989s
Feb  8 14:36:32.543: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055620076s
Feb  8 14:36:34.897: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40923063s
Feb  8 14:36:36.907: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.420002746s
STEP: Saw pod success
Feb  8 14:36:36.907: INFO: Pod "pod-d0330f08-cd15-48b2-bfe5-f53f75180e75" satisfied condition "success or failure"
Feb  8 14:36:36.911: INFO: Trying to get logs from node iruya-node pod pod-d0330f08-cd15-48b2-bfe5-f53f75180e75 container test-container: 
STEP: delete the pod
Feb  8 14:36:36.989: INFO: Waiting for pod pod-d0330f08-cd15-48b2-bfe5-f53f75180e75 to disappear
Feb  8 14:36:37.093: INFO: Pod pod-d0330f08-cd15-48b2-bfe5-f53f75180e75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:36:37.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1790" for this suite.
Feb  8 14:36:43.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:36:43.243: INFO: namespace emptydir-1790 deletion completed in 6.141757679s

• [SLOW TEST:14.832 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:36:43.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-5jzs
STEP: Creating a pod to test atomic-volume-subpath
Feb  8 14:36:43.549: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5jzs" in namespace "subpath-383" to be "success or failure"
Feb  8 14:36:43.700: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Pending", Reason="", readiness=false. Elapsed: 150.341968ms
Feb  8 14:36:45.706: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156792961s
Feb  8 14:36:47.716: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16649862s
Feb  8 14:36:49.721: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171060929s
Feb  8 14:36:51.728: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178604625s
Feb  8 14:36:53.738: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 10.188824375s
Feb  8 14:36:55.747: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 12.197141463s
Feb  8 14:36:57.757: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 14.207973394s
Feb  8 14:36:59.769: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 16.219841411s
Feb  8 14:37:01.780: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 18.230756916s
Feb  8 14:37:03.792: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 20.242744537s
Feb  8 14:37:05.838: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 22.288905098s
Feb  8 14:37:07.846: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 24.296995379s
Feb  8 14:37:09.859: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 26.309795355s
Feb  8 14:37:11.870: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Running", Reason="", readiness=true. Elapsed: 28.320343852s
Feb  8 14:37:13.885: INFO: Pod "pod-subpath-test-downwardapi-5jzs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.335243379s
STEP: Saw pod success
Feb  8 14:37:13.885: INFO: Pod "pod-subpath-test-downwardapi-5jzs" satisfied condition "success or failure"
Feb  8 14:37:13.892: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-5jzs container test-container-subpath-downwardapi-5jzs: 
STEP: delete the pod
Feb  8 14:37:14.012: INFO: Waiting for pod pod-subpath-test-downwardapi-5jzs to disappear
Feb  8 14:37:14.020: INFO: Pod pod-subpath-test-downwardapi-5jzs no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5jzs
Feb  8 14:37:14.020: INFO: Deleting pod "pod-subpath-test-downwardapi-5jzs" in namespace "subpath-383"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:37:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-383" for this suite.
Feb  8 14:37:20.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:37:20.215: INFO: namespace subpath-383 deletion completed in 6.18505734s

• [SLOW TEST:36.972 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
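The atomic-writer subpath test above exercises `subPath` mounts, where the container mounts a subdirectory of a volume rather than its root. The essential resolution step, including the escape check that makes `subPath` safe, can be sketched as (simplified; the real kubelet also guards against symlink races at mount time):

```python
import os

def mount_source(volume_root, sub_path):
    """Resolve the source directory for a subPath mount and reject paths
    that escape the volume. A simplified sketch of the kubelet's check."""
    root = os.path.realpath(volume_root)
    full = os.path.realpath(os.path.join(root, sub_path))
    if not full.startswith(root + os.sep):
        raise ValueError("subPath escapes the volume")
    return full
```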
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:37:20.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 14:37:20.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3595'
Feb  8 14:37:20.599: INFO: stderr: ""
Feb  8 14:37:20.599: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  8 14:37:20.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3595'
Feb  8 14:37:26.554: INFO: stderr: ""
Feb  8 14:37:26.555: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:37:26.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3595" for this suite.
Feb  8 14:37:32.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:37:32.740: INFO: namespace kubectl-3595 deletion completed in 6.169078278s

• [SLOW TEST:12.525 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:37:32.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 14:37:32.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1426'
Feb  8 14:37:32.950: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 14:37:32.950: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  8 14:37:33.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1426'
Feb  8 14:37:33.231: INFO: stderr: ""
Feb  8 14:37:33.231: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:37:33.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1426" for this suite.
Feb  8 14:37:55.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:37:55.452: INFO: namespace kubectl-1426 deletion completed in 22.21149218s

• [SLOW TEST:22.712 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
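The two `kubectl run` tests above (and the deprecation warning in the second one) reflect how kubectl v1.15 picked a generator from the `--restart` flag: `--restart=Never` produced a bare Pod via `run-pod/v1`, while the default produced a Deployment via the deprecated `deployment/apps.v1` generator. Our own summary of that since-removed behavior:

```python
def generator_for_restart_policy(restart="Always"):
    """Approximate kubectl v1.15 `run` generator selection, as seen in the
    deprecation warning and the created resources in the log above.
    (Modern kubectl always creates a Pod.)"""
    return {
        "Always": ("deployment/apps.v1", "deployment"),  # deprecated path
        "OnFailure": ("job/v1", "job"),                  # also deprecated
        "Never": ("run-pod/v1", "pod"),                  # the recommended path
    }[restart]
```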
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:37:55.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:38:51.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3149" for this suite.
Feb  8 14:38:57.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:38:57.162: INFO: namespace container-runtime-3149 deletion completed in 6.125033105s

• [SLOW TEST:61.709 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
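For context on the container names above: by the naming convention in test/e2e/common/runtime.go, the suffixes appear to encode the restart policy under test (rpa = RestartPolicy Always, rpof = OnFailure, rpn = Never). A minimal hand-written manifest sketching the Never case — pod name, image, and command are illustrative, not the test's generated spec:

```yaml
# Illustrative only: a container that exits cleanly under restartPolicy Never.
# The 'terminate-cmd-rpn' case observes exactly this outcome:
# Phase=Succeeded, Ready=false, RestartCount=0, State=Terminated.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]
```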
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:38:57.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  8 14:39:15.486: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:15.492: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:17.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:17.503: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:19.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:19.500: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:21.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:21.501: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:23.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:23.502: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:25.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:25.499: INFO: Pod pod-with-poststart-http-hook still exists
Feb  8 14:39:27.492: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  8 14:39:27.502: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:39:27.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8036" for this suite.
Feb  8 14:39:49.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:39:49.680: INFO: namespace container-lifecycle-hook-8036 deletion completed in 22.169903736s

• [SLOW TEST:52.518 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
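The "create the container to handle the HTTPGet hook request" step above stands up a separate server pod; the pod under test then declares a postStart httpGet hook pointed at it. A sketch of the relevant shape — host, port, and path are hypothetical placeholders, not values from this run:

```yaml
# Illustrative postStart httpGet lifecycle hook; the kubelet issues this GET
# after the container starts, which is what the "check poststart hook" step
# verifies on the handler side. All addresses here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # hypothetical handler path
          port: 8080                  # hypothetical handler port
          host: 10.32.0.1             # hypothetical handler pod IP
```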
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:39:49.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-796
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  8 14:39:49.830: INFO: Found 0 stateful pods, waiting for 3
Feb  8 14:39:59.842: INFO: Found 2 stateful pods, waiting for 3
Feb  8 14:40:09.855: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:40:09.855: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:40:09.855: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  8 14:40:19.847: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:40:19.847: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:40:19.847: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  8 14:40:19.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-796 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 14:40:20.312: INFO: stderr: "I0208 14:40:20.092738    2886 log.go:172] (0xc000984420) (0xc00029e820) Create stream\nI0208 14:40:20.093529    2886 log.go:172] (0xc000984420) (0xc00029e820) Stream added, broadcasting: 1\nI0208 14:40:20.102020    2886 log.go:172] (0xc000984420) Reply frame received for 1\nI0208 14:40:20.102428    2886 log.go:172] (0xc000984420) (0xc0005dc320) Create stream\nI0208 14:40:20.102594    2886 log.go:172] (0xc000984420) (0xc0005dc320) Stream added, broadcasting: 3\nI0208 14:40:20.109331    2886 log.go:172] (0xc000984420) Reply frame received for 3\nI0208 14:40:20.109390    2886 log.go:172] (0xc000984420) (0xc0005dc280) Create stream\nI0208 14:40:20.109411    2886 log.go:172] (0xc000984420) (0xc0005dc280) Stream added, broadcasting: 5\nI0208 14:40:20.110297    2886 log.go:172] (0xc000984420) Reply frame received for 5\nI0208 14:40:20.185009    2886 log.go:172] (0xc000984420) Data frame received for 5\nI0208 14:40:20.185079    2886 log.go:172] (0xc0005dc280) (5) Data frame handling\nI0208 14:40:20.185114    2886 log.go:172] (0xc0005dc280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 14:40:20.234470    2886 log.go:172] (0xc000984420) Data frame received for 3\nI0208 14:40:20.234512    2886 log.go:172] (0xc0005dc320) (3) Data frame handling\nI0208 14:40:20.234532    2886 log.go:172] (0xc0005dc320) (3) Data frame sent\nI0208 14:40:20.296969    2886 log.go:172] (0xc000984420) (0xc0005dc320) Stream removed, broadcasting: 3\nI0208 14:40:20.297146    2886 log.go:172] (0xc000984420) Data frame received for 1\nI0208 14:40:20.297201    2886 log.go:172] (0xc00029e820) (1) Data frame handling\nI0208 14:40:20.297254    2886 log.go:172] (0xc00029e820) (1) Data frame sent\nI0208 14:40:20.297296    2886 log.go:172] (0xc000984420) (0xc00029e820) Stream removed, broadcasting: 1\nI0208 14:40:20.297368    2886 log.go:172] (0xc000984420) (0xc0005dc280) Stream removed, broadcasting: 5\nI0208 14:40:20.297534    2886 log.go:172] 
(0xc000984420) Go away received\nI0208 14:40:20.298300    2886 log.go:172] (0xc000984420) (0xc00029e820) Stream removed, broadcasting: 1\nI0208 14:40:20.298594    2886 log.go:172] (0xc000984420) (0xc0005dc320) Stream removed, broadcasting: 3\nI0208 14:40:20.298676    2886 log.go:172] (0xc000984420) (0xc0005dc280) Stream removed, broadcasting: 5\n"
Feb  8 14:40:20.312: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 14:40:20.312: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  8 14:40:30.371: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  8 14:40:40.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-796 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 14:40:41.242: INFO: stderr: "I0208 14:40:41.053124    2908 log.go:172] (0xc0006a6370) (0xc000358820) Create stream\nI0208 14:40:41.053338    2908 log.go:172] (0xc0006a6370) (0xc000358820) Stream added, broadcasting: 1\nI0208 14:40:41.056919    2908 log.go:172] (0xc0006a6370) Reply frame received for 1\nI0208 14:40:41.056968    2908 log.go:172] (0xc0006a6370) (0xc000916000) Create stream\nI0208 14:40:41.056987    2908 log.go:172] (0xc0006a6370) (0xc000916000) Stream added, broadcasting: 3\nI0208 14:40:41.059094    2908 log.go:172] (0xc0006a6370) Reply frame received for 3\nI0208 14:40:41.059215    2908 log.go:172] (0xc0006a6370) (0xc0007b2000) Create stream\nI0208 14:40:41.059242    2908 log.go:172] (0xc0006a6370) (0xc0007b2000) Stream added, broadcasting: 5\nI0208 14:40:41.060525    2908 log.go:172] (0xc0006a6370) Reply frame received for 5\nI0208 14:40:41.146917    2908 log.go:172] (0xc0006a6370) Data frame received for 5\nI0208 14:40:41.146986    2908 log.go:172] (0xc0007b2000) (5) Data frame handling\nI0208 14:40:41.147010    2908 log.go:172] (0xc0007b2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 14:40:41.147410    2908 log.go:172] (0xc0006a6370) Data frame received for 3\nI0208 14:40:41.147430    2908 log.go:172] (0xc000916000) (3) Data frame handling\nI0208 14:40:41.147448    2908 log.go:172] (0xc000916000) (3) Data frame sent\nI0208 14:40:41.233266    2908 log.go:172] (0xc0006a6370) (0xc000916000) Stream removed, broadcasting: 3\nI0208 14:40:41.233412    2908 log.go:172] (0xc0006a6370) Data frame received for 1\nI0208 14:40:41.233443    2908 log.go:172] (0xc000358820) (1) Data frame handling\nI0208 14:40:41.233465    2908 log.go:172] (0xc000358820) (1) Data frame sent\nI0208 14:40:41.233539    2908 log.go:172] (0xc0006a6370) (0xc0007b2000) Stream removed, broadcasting: 5\nI0208 14:40:41.233632    2908 log.go:172] (0xc0006a6370) (0xc000358820) Stream removed, broadcasting: 1\nI0208 14:40:41.233655    2908 log.go:172] 
(0xc0006a6370) Go away received\nI0208 14:40:41.234244    2908 log.go:172] (0xc0006a6370) (0xc000358820) Stream removed, broadcasting: 1\nI0208 14:40:41.234321    2908 log.go:172] (0xc0006a6370) (0xc000916000) Stream removed, broadcasting: 3\nI0208 14:40:41.234342    2908 log.go:172] (0xc0006a6370) (0xc0007b2000) Stream removed, broadcasting: 5\n"
Feb  8 14:40:41.242: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 14:40:41.242: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 14:40:51.283: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:40:51.283: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:40:51.283: INFO: Waiting for Pod statefulset-796/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:40:51.283: INFO: Waiting for Pod statefulset-796/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:41:01.298: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:41:01.298: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:41:01.298: INFO: Waiting for Pod statefulset-796/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:41:11.300: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:41:11.300: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:41:21.425: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:41:21.425: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  8 14:41:31.297: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  8 14:41:41.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-796 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  8 14:41:41.639: INFO: stderr: "I0208 14:41:41.451250    2926 log.go:172] (0xc0008fa420) (0xc00077a640) Create stream\nI0208 14:41:41.451452    2926 log.go:172] (0xc0008fa420) (0xc00077a640) Stream added, broadcasting: 1\nI0208 14:41:41.454103    2926 log.go:172] (0xc0008fa420) Reply frame received for 1\nI0208 14:41:41.454130    2926 log.go:172] (0xc0008fa420) (0xc0008d6000) Create stream\nI0208 14:41:41.454137    2926 log.go:172] (0xc0008fa420) (0xc0008d6000) Stream added, broadcasting: 3\nI0208 14:41:41.455119    2926 log.go:172] (0xc0008fa420) Reply frame received for 3\nI0208 14:41:41.455140    2926 log.go:172] (0xc0008fa420) (0xc00077a6e0) Create stream\nI0208 14:41:41.455150    2926 log.go:172] (0xc0008fa420) (0xc00077a6e0) Stream added, broadcasting: 5\nI0208 14:41:41.456073    2926 log.go:172] (0xc0008fa420) Reply frame received for 5\nI0208 14:41:41.532245    2926 log.go:172] (0xc0008fa420) Data frame received for 5\nI0208 14:41:41.532283    2926 log.go:172] (0xc00077a6e0) (5) Data frame handling\nI0208 14:41:41.532298    2926 log.go:172] (0xc00077a6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0208 14:41:41.549916    2926 log.go:172] (0xc0008fa420) Data frame received for 3\nI0208 14:41:41.549959    2926 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0208 14:41:41.549982    2926 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0208 14:41:41.629380    2926 log.go:172] (0xc0008fa420) Data frame received for 1\nI0208 14:41:41.629428    2926 log.go:172] (0xc0008fa420) (0xc0008d6000) Stream removed, broadcasting: 3\nI0208 14:41:41.629459    2926 log.go:172] (0xc00077a640) (1) Data frame handling\nI0208 14:41:41.629479    2926 log.go:172] (0xc00077a640) (1) Data frame sent\nI0208 14:41:41.629573    2926 log.go:172] (0xc0008fa420) (0xc00077a640) Stream removed, broadcasting: 1\nI0208 14:41:41.629853    2926 log.go:172] (0xc0008fa420) (0xc00077a6e0) Stream removed, broadcasting: 5\nI0208 14:41:41.629910    2926 log.go:172] 
(0xc0008fa420) Go away received\nI0208 14:41:41.629945    2926 log.go:172] (0xc0008fa420) (0xc00077a640) Stream removed, broadcasting: 1\nI0208 14:41:41.629987    2926 log.go:172] (0xc0008fa420) (0xc0008d6000) Stream removed, broadcasting: 3\nI0208 14:41:41.629995    2926 log.go:172] (0xc0008fa420) (0xc00077a6e0) Stream removed, broadcasting: 5\n"
Feb  8 14:41:41.640: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  8 14:41:41.640: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  8 14:41:51.699: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  8 14:42:01.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-796 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  8 14:42:02.297: INFO: stderr: "I0208 14:42:02.100798    2944 log.go:172] (0xc0008ec370) (0xc0007f8780) Create stream\nI0208 14:42:02.100930    2944 log.go:172] (0xc0008ec370) (0xc0007f8780) Stream added, broadcasting: 1\nI0208 14:42:02.103503    2944 log.go:172] (0xc0008ec370) Reply frame received for 1\nI0208 14:42:02.103531    2944 log.go:172] (0xc0008ec370) (0xc0003ac1e0) Create stream\nI0208 14:42:02.103538    2944 log.go:172] (0xc0008ec370) (0xc0003ac1e0) Stream added, broadcasting: 3\nI0208 14:42:02.104631    2944 log.go:172] (0xc0008ec370) Reply frame received for 3\nI0208 14:42:02.104674    2944 log.go:172] (0xc0008ec370) (0xc0005d4000) Create stream\nI0208 14:42:02.104687    2944 log.go:172] (0xc0008ec370) (0xc0005d4000) Stream added, broadcasting: 5\nI0208 14:42:02.106976    2944 log.go:172] (0xc0008ec370) Reply frame received for 5\nI0208 14:42:02.201544    2944 log.go:172] (0xc0008ec370) Data frame received for 5\nI0208 14:42:02.201723    2944 log.go:172] (0xc0005d4000) (5) Data frame handling\nI0208 14:42:02.201767    2944 log.go:172] (0xc0005d4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0208 14:42:02.201802    2944 log.go:172] (0xc0008ec370) Data frame received for 3\nI0208 14:42:02.201968    2944 log.go:172] (0xc0003ac1e0) (3) Data frame handling\nI0208 14:42:02.202003    2944 log.go:172] (0xc0003ac1e0) (3) Data frame sent\nI0208 14:42:02.287517    2944 log.go:172] (0xc0008ec370) (0xc0003ac1e0) Stream removed, broadcasting: 3\nI0208 14:42:02.287681    2944 log.go:172] (0xc0008ec370) Data frame received for 1\nI0208 14:42:02.287802    2944 log.go:172] (0xc0007f8780) (1) Data frame handling\nI0208 14:42:02.287844    2944 log.go:172] (0xc0007f8780) (1) Data frame sent\nI0208 14:42:02.287882    2944 log.go:172] (0xc0008ec370) (0xc0005d4000) Stream removed, broadcasting: 5\nI0208 14:42:02.287952    2944 log.go:172] (0xc0008ec370) (0xc0007f8780) Stream removed, broadcasting: 1\nI0208 14:42:02.287977    2944 log.go:172] 
(0xc0008ec370) Go away received\nI0208 14:42:02.289245    2944 log.go:172] (0xc0008ec370) (0xc0007f8780) Stream removed, broadcasting: 1\nI0208 14:42:02.289361    2944 log.go:172] (0xc0008ec370) (0xc0003ac1e0) Stream removed, broadcasting: 3\nI0208 14:42:02.289409    2944 log.go:172] (0xc0008ec370) (0xc0005d4000) Stream removed, broadcasting: 5\n"
Feb  8 14:42:02.297: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  8 14:42:02.297: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  8 14:42:12.323: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:42:12.323: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:12.323: INFO: Waiting for Pod statefulset-796/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:12.323: INFO: Waiting for Pod statefulset-796/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:22.337: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:42:22.337: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:22.338: INFO: Waiting for Pod statefulset-796/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:32.334: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:42:32.334: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:32.334: INFO: Waiting for Pod statefulset-796/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:42.877: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:42:42.877: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:42:52.336: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
Feb  8 14:42:52.336: INFO: Waiting for Pod statefulset-796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  8 14:43:02.335: INFO: Waiting for StatefulSet statefulset-796/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  8 14:43:12.334: INFO: Deleting all statefulset in ns statefulset-796
Feb  8 14:43:12.338: INFO: Scaling statefulset ss2 to 0
Feb  8 14:43:52.367: INFO: Waiting for statefulset status.replicas updated to 0
Feb  8 14:43:52.370: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:43:52.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-796" for this suite.
Feb  8 14:44:00.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:44:00.650: INFO: namespace statefulset-796 deletion completed in 8.231579295s

• [SLOW TEST:250.970 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
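The `kubectl exec … /bin/sh -x -c 'mv -v … || true'` commands the StatefulSet test repeats above rely on a small shell idiom: `|| true` forces a zero exit status even when the source file has already been moved, so re-running the command during the rolling update cannot fail the exec. A self-contained local demonstration (paths are throwaway temp files, unrelated to the test pods):

```shell
# Demonstrates the `mv -v <src> <dst> || true` idiom from the log.
set -u
dir=$(mktemp -d)
mkdir -p "$dir/html"
echo hello > "$dir/html/index.html"
mv -v "$dir/html/index.html" "$dir/" || true   # first move succeeds; -v prints the rename
mv -v "$dir/html/index.html" "$dir/" || true   # source is gone; mv fails, || true swallows it
echo "exit=$?"                                 # still 0, so kubectl exec reports success
rm -rf "$dir"
```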
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:44:00.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b4f5dd9b-53ec-44b1-8590-0c4daaed634d
STEP: Creating a pod to test consume secrets
Feb  8 14:44:00.784: INFO: Waiting up to 5m0s for pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2" in namespace "secrets-1522" to be "success or failure"
Feb  8 14:44:00.795: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.677524ms
Feb  8 14:44:02.806: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021710489s
Feb  8 14:44:04.815: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031060362s
Feb  8 14:44:06.824: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040209445s
Feb  8 14:44:08.831: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047044812s
STEP: Saw pod success
Feb  8 14:44:08.831: INFO: Pod "pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2" satisfied condition "success or failure"
Feb  8 14:44:08.836: INFO: Trying to get logs from node iruya-node pod pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2 container secret-volume-test: 
STEP: delete the pod
Feb  8 14:44:08.926: INFO: Waiting for pod pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2 to disappear
Feb  8 14:44:08.930: INFO: Pod pod-secrets-33f34018-6f9b-4ff9-9fd5-4a22150075f2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:44:08.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1522" for this suite.
Feb  8 14:44:15.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:44:15.158: INFO: namespace secrets-1522 deletion completed in 6.222390199s

• [SLOW TEST:14.508 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
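The Secrets test above mounts a generated secret as a volume and has a short-lived container cat the projected key back, then checks for Phase=Succeeded ("success or failure"). A sketch of that pod shape — names, key, and image are illustrative, not the generated `pod-secrets-*` spec:

```yaml
# Illustrative secret-as-volume consumption matching the test's pattern.
# All names here are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
```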
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:44:15.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:44:15.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441" in namespace "downward-api-5813" to be "success or failure"
Feb  8 14:44:15.282: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Pending", Reason="", readiness=false. Elapsed: 4.969191ms
Feb  8 14:44:17.291: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013093996s
Feb  8 14:44:19.352: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074580487s
Feb  8 14:44:21.410: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132940501s
Feb  8 14:44:23.418: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140433017s
Feb  8 14:44:25.875: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.597984821s
STEP: Saw pod success
Feb  8 14:44:25.876: INFO: Pod "downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441" satisfied condition "success or failure"
Feb  8 14:44:25.883: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441 container client-container: 
STEP: delete the pod
Feb  8 14:44:26.213: INFO: Waiting for pod downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441 to disappear
Feb  8 14:44:26.235: INFO: Pod downwardapi-volume-64ca3ee9-b3f2-44cc-a97e-eeb4f89e3441 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:44:26.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5813" for this suite.
Feb  8 14:44:32.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:44:32.344: INFO: namespace downward-api-5813 deletion completed in 6.101025103s

• [SLOW TEST:17.185 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
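The Downward API volume test above projects the container's own CPU request into a file and has the `client-container` read it back. A sketch of that projection — pod name, image, and the 250m request are illustrative, not the generated spec:

```yaml
# Illustrative downwardAPI volume exposing the container's CPU request via
# resourceFieldRef, which is what the test's client-container reads.
# Names and values are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```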
SS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:44:32.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  8 14:44:32.439: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  8 14:44:37.446: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:44:37.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1548" for this suite.
Feb  8 14:44:43.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:44:43.876: INFO: namespace replication-controller-1548 deletion completed in 6.29391961s

• [SLOW TEST:11.533 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:44:43.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  8 14:44:43.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2060'
Feb  8 14:44:44.277: INFO: stderr: ""
Feb  8 14:44:44.277: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  8 14:44:45.287: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:45.288: INFO: Found 0 / 1
Feb  8 14:44:46.291: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:46.291: INFO: Found 0 / 1
Feb  8 14:44:47.285: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:47.285: INFO: Found 0 / 1
Feb  8 14:44:48.286: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:48.286: INFO: Found 0 / 1
Feb  8 14:44:49.288: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:49.288: INFO: Found 0 / 1
Feb  8 14:44:50.287: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:50.287: INFO: Found 0 / 1
Feb  8 14:44:51.287: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:51.287: INFO: Found 0 / 1
Feb  8 14:44:52.286: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:52.286: INFO: Found 0 / 1
Feb  8 14:44:53.449: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:53.449: INFO: Found 0 / 1
Feb  8 14:44:54.288: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:54.289: INFO: Found 0 / 1
Feb  8 14:44:55.287: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:55.287: INFO: Found 0 / 1
Feb  8 14:44:56.288: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:56.289: INFO: Found 1 / 1
Feb  8 14:44:56.289: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  8 14:44:56.295: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:56.295: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  8 14:44:56.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-zbf59 --namespace=kubectl-2060 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  8 14:44:56.538: INFO: stderr: ""
Feb  8 14:44:56.538: INFO: stdout: "pod/redis-master-zbf59 patched\n"
STEP: checking annotations
Feb  8 14:44:56.554: INFO: Selector matched 1 pods for map[app:redis]
Feb  8 14:44:56.554: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:44:56.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2060" for this suite.
Feb  8 14:45:18.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:45:18.810: INFO: namespace kubectl-2060 deletion completed in 22.246119172s

• [SLOW TEST:34.933 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:45:18.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-ef947d7a-9d59-48d7-bebc-6e263cbe1aef
STEP: Creating secret with name secret-projected-all-test-volume-139b9345-c285-4434-93a3-db7b4f51746e
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  8 14:45:18.941: INFO: Waiting up to 5m0s for pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965" in namespace "projected-5120" to be "success or failure"
Feb  8 14:45:18.952: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965": Phase="Pending", Reason="", readiness=false. Elapsed: 10.082112ms
Feb  8 14:45:20.963: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021858557s
Feb  8 14:45:22.972: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030373734s
Feb  8 14:45:24.979: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03775826s
Feb  8 14:45:26.988: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.046220747s
STEP: Saw pod success
Feb  8 14:45:26.988: INFO: Pod "projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965" satisfied condition "success or failure"
Feb  8 14:45:26.996: INFO: Trying to get logs from node iruya-node pod projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965 container projected-all-volume-test: 
STEP: delete the pod
Feb  8 14:45:27.229: INFO: Waiting for pod projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965 to disappear
Feb  8 14:45:27.235: INFO: Pod projected-volume-b17b71ca-2582-4950-9fb9-c2e50a241965 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:45:27.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5120" for this suite.
Feb  8 14:45:33.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:45:33.436: INFO: namespace projected-5120 deletion completed in 6.194375984s

• [SLOW TEST:14.626 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:45:33.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  8 14:45:33.664: INFO: Waiting up to 5m0s for pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2" in namespace "emptydir-1368" to be "success or failure"
Feb  8 14:45:33.671: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469975ms
Feb  8 14:45:35.693: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02847106s
Feb  8 14:45:37.705: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040394421s
Feb  8 14:45:39.722: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057450559s
Feb  8 14:45:41.734: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070184935s
STEP: Saw pod success
Feb  8 14:45:41.734: INFO: Pod "pod-0326711c-5281-4334-88fd-f5a62a2542e2" satisfied condition "success or failure"
Feb  8 14:45:41.739: INFO: Trying to get logs from node iruya-node pod pod-0326711c-5281-4334-88fd-f5a62a2542e2 container test-container: 
STEP: delete the pod
Feb  8 14:45:41.895: INFO: Waiting for pod pod-0326711c-5281-4334-88fd-f5a62a2542e2 to disappear
Feb  8 14:45:41.908: INFO: Pod pod-0326711c-5281-4334-88fd-f5a62a2542e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:45:41.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1368" for this suite.
Feb  8 14:45:47.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:45:48.100: INFO: namespace emptydir-1368 deletion completed in 6.179367655s

• [SLOW TEST:14.662 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:45:48.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  8 14:45:48.215: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1693" to be "success or failure"
Feb  8 14:45:48.240: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 25.379135ms
Feb  8 14:45:50.247: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032506542s
Feb  8 14:45:52.255: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040176994s
Feb  8 14:45:54.261: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046347142s
Feb  8 14:45:56.286: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071643455s
Feb  8 14:45:58.296: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081315509s
Feb  8 14:46:00.338: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.123481129s
STEP: Saw pod success
Feb  8 14:46:00.338: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  8 14:46:00.348: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  8 14:46:01.231: INFO: Waiting for pod pod-host-path-test to disappear
Feb  8 14:46:01.244: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:46:01.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1693" for this suite.
Feb  8 14:46:07.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:46:07.439: INFO: namespace hostpath-1693 deletion completed in 6.183088036s

• [SLOW TEST:19.339 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:46:07.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  8 14:46:17.669: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:46:17.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1956" for this suite.
Feb  8 14:46:23.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:46:23.963: INFO: namespace container-runtime-1956 deletion completed in 6.174140652s

• [SLOW TEST:16.522 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:46:23.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:46:24.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59" in namespace "downward-api-7375" to be "success or failure"
Feb  8 14:46:24.126: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Pending", Reason="", readiness=false. Elapsed: 54.63406ms
Feb  8 14:46:26.137: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065742975s
Feb  8 14:46:28.144: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07253035s
Feb  8 14:46:30.151: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080388223s
Feb  8 14:46:32.163: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Running", Reason="", readiness=true. Elapsed: 8.092182952s
Feb  8 14:46:34.171: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100147302s
STEP: Saw pod success
Feb  8 14:46:34.171: INFO: Pod "downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59" satisfied condition "success or failure"
Feb  8 14:46:34.175: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59 container client-container: 
STEP: delete the pod
Feb  8 14:46:34.403: INFO: Waiting for pod downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59 to disappear
Feb  8 14:46:34.422: INFO: Pod downwardapi-volume-cbb4d264-7393-4750-914c-4df15b9a1d59 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:46:34.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7375" for this suite.
Feb  8 14:46:40.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:46:40.604: INFO: namespace downward-api-7375 deletion completed in 6.175402989s

• [SLOW TEST:16.640 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:46:40.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  8 14:46:51.011: INFO: 10 pods remaining
Feb  8 14:46:51.011: INFO: 10 pods has nil DeletionTimestamp
Feb  8 14:46:51.011: INFO: 
Feb  8 14:46:51.714: INFO: 10 pods remaining
Feb  8 14:46:51.714: INFO: 0 pods has nil DeletionTimestamp
Feb  8 14:46:51.714: INFO: 
STEP: Gathering metrics
W0208 14:46:52.300469       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  8 14:46:52.300: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:46:52.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2809" for this suite.
Feb  8 14:47:06.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:47:06.550: INFO: namespace gc-2809 deletion completed in 14.235301901s

• [SLOW TEST:25.946 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:47:06.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:48:06.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3506" for this suite.
Feb  8 14:48:28.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:48:28.984: INFO: namespace container-probe-3506 deletion completed in 22.207477813s

• [SLOW TEST:82.433 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:48:28.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  8 14:48:29.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  8 14:48:30.850: INFO: stderr: ""
Feb  8 14:48:30.850: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:48:30.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2056" for this suite.
Feb  8 14:48:36.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:48:37.136: INFO: namespace kubectl-2056 deletion completed in 6.268110539s

• [SLOW TEST:8.152 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:48:37.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 14:48:37.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2587'
Feb  8 14:48:37.536: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  8 14:48:37.536: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  8 14:48:37.595: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-j7fsz]
Feb  8 14:48:37.595: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-j7fsz" in namespace "kubectl-2587" to be "running and ready"
Feb  8 14:48:37.682: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 86.282054ms
Feb  8 14:48:39.695: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09938019s
Feb  8 14:48:41.711: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115286873s
Feb  8 14:48:43.717: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122045781s
Feb  8 14:48:45.724: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129059656s
Feb  8 14:48:47.731: INFO: Pod "e2e-test-nginx-rc-j7fsz": Phase="Running", Reason="", readiness=true. Elapsed: 10.135933476s
Feb  8 14:48:47.731: INFO: Pod "e2e-test-nginx-rc-j7fsz" satisfied condition "running and ready"
Feb  8 14:48:47.731: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-j7fsz]
Feb  8 14:48:47.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2587'
Feb  8 14:48:47.984: INFO: stderr: ""
Feb  8 14:48:47.984: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  8 14:48:47.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2587'
Feb  8 14:48:48.110: INFO: stderr: ""
Feb  8 14:48:48.111: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:48:48.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2587" for this suite.
Feb  8 14:49:10.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:49:10.257: INFO: namespace kubectl-2587 deletion completed in 22.134633677s

• [SLOW TEST:33.121 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:49:10.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  8 14:49:26.428: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:26.455: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:28.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:28.470: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:30.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:30.468: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:32.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:32.471: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:34.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:34.470: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:36.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:36.468: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:38.457: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:38.474: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:40.456: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:40.470: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:42.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:42.467: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:44.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:44.467: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:46.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:46.476: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:48.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:48.465: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:50.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:50.470: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:52.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:52.467: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:54.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:54.463: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:56.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:56.462: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  8 14:49:58.455: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  8 14:49:58.467: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:49:58.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7723" for this suite.
Feb  8 14:50:20.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:50:21.124: INFO: namespace container-lifecycle-hook-7723 deletion completed in 22.609422325s

• [SLOW TEST:70.866 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:50:21.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:50:21.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:50:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9007" for this suite.
Feb  8 14:51:19.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:51:19.895: INFO: namespace pods-9007 deletion completed in 50.190642844s

• [SLOW TEST:58.771 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:51:19.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-88aa9f89-f774-4b68-a056-30f32d5a787d
STEP: Creating a pod to test consume secrets
Feb  8 14:51:20.021: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72" in namespace "projected-3170" to be "success or failure"
Feb  8 14:51:20.041: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72": Phase="Pending", Reason="", readiness=false. Elapsed: 20.138218ms
Feb  8 14:51:22.048: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026546044s
Feb  8 14:51:24.058: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036756073s
Feb  8 14:51:26.063: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041910152s
Feb  8 14:51:28.074: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053068853s
STEP: Saw pod success
Feb  8 14:51:28.075: INFO: Pod "pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72" satisfied condition "success or failure"
Feb  8 14:51:28.080: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 14:51:28.192: INFO: Waiting for pod pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72 to disappear
Feb  8 14:51:28.199: INFO: Pod pod-projected-secrets-01f797f1-8935-4798-b222-470a60114c72 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:51:28.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3170" for this suite.
Feb  8 14:51:36.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:51:36.374: INFO: namespace projected-3170 deletion completed in 8.167650854s

• [SLOW TEST:16.478 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:51:36.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:51:36.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea" in namespace "downward-api-9423" to be "success or failure"
Feb  8 14:51:36.531: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea": Phase="Pending", Reason="", readiness=false. Elapsed: 10.146684ms
Feb  8 14:51:38.545: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024207819s
Feb  8 14:51:40.558: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036698382s
Feb  8 14:51:42.570: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049154328s
Feb  8 14:51:44.586: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065006294s
STEP: Saw pod success
Feb  8 14:51:44.586: INFO: Pod "downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea" satisfied condition "success or failure"
Feb  8 14:51:44.594: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea container client-container: 
STEP: delete the pod
Feb  8 14:51:44.747: INFO: Waiting for pod downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea to disappear
Feb  8 14:51:44.756: INFO: Pod downwardapi-volume-dee73c7c-fdeb-4fb1-9fd5-59eafcc621ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:51:44.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9423" for this suite.
Feb  8 14:51:50.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:51:50.935: INFO: namespace downward-api-9423 deletion completed in 6.169112384s

• [SLOW TEST:14.561 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:51:50.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  8 14:51:51.031: INFO: Waiting up to 5m0s for pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced" in namespace "containers-4979" to be "success or failure"
Feb  8 14:51:51.063: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced": Phase="Pending", Reason="", readiness=false. Elapsed: 32.042181ms
Feb  8 14:51:53.069: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038139276s
Feb  8 14:51:55.078: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046852111s
Feb  8 14:51:57.085: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05328666s
Feb  8 14:51:59.090: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058311841s
STEP: Saw pod success
Feb  8 14:51:59.090: INFO: Pod "client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced" satisfied condition "success or failure"
Feb  8 14:51:59.091: INFO: Trying to get logs from node iruya-node pod client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced container test-container: 
STEP: delete the pod
Feb  8 14:51:59.128: INFO: Waiting for pod client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced to disappear
Feb  8 14:51:59.147: INFO: Pod client-containers-c20f260e-b28e-485e-ac3f-7fe9f91c1ced no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:51:59.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4979" for this suite.
Feb  8 14:52:05.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:52:05.363: INFO: namespace containers-4979 deletion completed in 6.211638083s

• [SLOW TEST:14.427 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:52:05.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:52:06.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233" in namespace "projected-700" to be "success or failure"
Feb  8 14:52:06.144: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233": Phase="Pending", Reason="", readiness=false. Elapsed: 13.61941ms
Feb  8 14:52:08.262: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131791332s
Feb  8 14:52:10.288: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157475283s
Feb  8 14:52:12.296: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16568992s
Feb  8 14:52:14.312: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181282083s
STEP: Saw pod success
Feb  8 14:52:14.312: INFO: Pod "downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233" satisfied condition "success or failure"
Feb  8 14:52:14.316: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233 container client-container: 
STEP: delete the pod
Feb  8 14:52:14.424: INFO: Waiting for pod downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233 to disappear
Feb  8 14:52:14.432: INFO: Pod downwardapi-volume-3f457213-4dfa-49b4-8586-1979154d9233 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:52:14.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-700" for this suite.
Feb  8 14:52:20.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:52:20.607: INFO: namespace projected-700 deletion completed in 6.170141706s

• [SLOW TEST:15.244 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:52:20.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:52:20.729: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.682836ms)
Feb  8 14:52:20.734: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.47349ms)
Feb  8 14:52:20.738: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.215512ms)
Feb  8 14:52:20.743: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.746301ms)
Feb  8 14:52:20.747: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.835869ms)
Feb  8 14:52:20.751: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.517641ms)
Feb  8 14:52:20.755: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.937731ms)
Feb  8 14:52:20.759: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.746364ms)
Feb  8 14:52:20.766: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.396008ms)
Feb  8 14:52:20.771: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.056052ms)
Feb  8 14:52:20.777: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.85242ms)
Feb  8 14:52:20.784: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.807189ms)
Feb  8 14:52:20.792: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.227991ms)
Feb  8 14:52:20.803: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.364364ms)
Feb  8 14:52:20.812: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.657711ms)
Feb  8 14:52:20.820: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.952637ms)
Feb  8 14:52:20.830: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.369277ms)
Feb  8 14:52:20.834: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.894105ms)
Feb  8 14:52:20.838: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.393259ms)
Feb  8 14:52:20.842: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.191838ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:52:20.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8970" for this suite.
Feb  8 14:52:26.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:52:26.964: INFO: namespace proxy-8970 deletion completed in 6.119041295s

• [SLOW TEST:6.357 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:52:26.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  8 14:52:27.042: INFO: Waiting up to 5m0s for pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159" in namespace "downward-api-101" to be "success or failure"
Feb  8 14:52:27.228: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159": Phase="Pending", Reason="", readiness=false. Elapsed: 185.647648ms
Feb  8 14:52:29.236: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193883173s
Feb  8 14:52:31.245: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202492486s
Feb  8 14:52:33.316: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273655078s
Feb  8 14:52:35.324: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.281594727s
STEP: Saw pod success
Feb  8 14:52:35.324: INFO: Pod "downward-api-46974c90-1e59-448c-a3a4-156773a66159" satisfied condition "success or failure"
Feb  8 14:52:35.328: INFO: Trying to get logs from node iruya-node pod downward-api-46974c90-1e59-448c-a3a4-156773a66159 container dapi-container: 
STEP: delete the pod
Feb  8 14:52:35.408: INFO: Waiting for pod downward-api-46974c90-1e59-448c-a3a4-156773a66159 to disappear
Feb  8 14:52:35.413: INFO: Pod downward-api-46974c90-1e59-448c-a3a4-156773a66159 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:52:35.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-101" for this suite.
Feb  8 14:52:41.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:52:41.655: INFO: namespace downward-api-101 deletion completed in 6.234611988s

• [SLOW TEST:14.691 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:52:41.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  8 14:52:50.530: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a68d5414-9950-4bf1-b839-30178ffdcc49"
Feb  8 14:52:50.530: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a68d5414-9950-4bf1-b839-30178ffdcc49" in namespace "pods-8877" to be "terminated due to deadline exceeded"
Feb  8 14:52:50.575: INFO: Pod "pod-update-activedeadlineseconds-a68d5414-9950-4bf1-b839-30178ffdcc49": Phase="Running", Reason="", readiness=true. Elapsed: 44.497521ms
Feb  8 14:52:52.588: INFO: Pod "pod-update-activedeadlineseconds-a68d5414-9950-4bf1-b839-30178ffdcc49": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.05788972s
Feb  8 14:52:52.588: INFO: Pod "pod-update-activedeadlineseconds-a68d5414-9950-4bf1-b839-30178ffdcc49" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:52:52.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8877" for this suite.
Feb  8 14:52:58.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:52:58.799: INFO: namespace pods-8877 deletion completed in 6.202767561s

• [SLOW TEST:17.143 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:52:58.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:52:58.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:53:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8062" for this suite.
Feb  8 14:53:59.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:53:59.138: INFO: namespace pods-8062 deletion completed in 52.114019359s

• [SLOW TEST:60.339 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:53:59.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 14:53:59.250: INFO: Create a RollingUpdate DaemonSet
Feb  8 14:53:59.256: INFO: Check that daemon pods launch on every node of the cluster
Feb  8 14:53:59.263: INFO: Number of nodes with available pods: 0
Feb  8 14:53:59.263: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:00.291: INFO: Number of nodes with available pods: 0
Feb  8 14:54:00.291: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:01.609: INFO: Number of nodes with available pods: 0
Feb  8 14:54:01.609: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:02.574: INFO: Number of nodes with available pods: 0
Feb  8 14:54:02.574: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:03.279: INFO: Number of nodes with available pods: 0
Feb  8 14:54:03.279: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:04.292: INFO: Number of nodes with available pods: 0
Feb  8 14:54:04.292: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:07.361: INFO: Number of nodes with available pods: 0
Feb  8 14:54:07.361: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:08.420: INFO: Number of nodes with available pods: 0
Feb  8 14:54:08.420: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:09.337: INFO: Number of nodes with available pods: 0
Feb  8 14:54:09.337: INFO: Node iruya-node is running more than one daemon pod
Feb  8 14:54:10.302: INFO: Number of nodes with available pods: 2
Feb  8 14:54:10.302: INFO: Number of running nodes: 2, number of available pods: 2
Feb  8 14:54:10.302: INFO: Update the DaemonSet to trigger a rollout
Feb  8 14:54:10.454: INFO: Updating DaemonSet daemon-set
Feb  8 14:54:27.539: INFO: Roll back the DaemonSet before rollout is complete
Feb  8 14:54:27.557: INFO: Updating DaemonSet daemon-set
Feb  8 14:54:27.557: INFO: Make sure DaemonSet rollback is complete
Feb  8 14:54:27.567: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:27.567: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:28.599: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:28.599: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:29.599: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:29.599: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:30.609: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:30.609: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:31.638: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:31.638: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:32.607: INFO: Wrong image for pod: daemon-set-nzszl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  8 14:54:32.607: INFO: Pod daemon-set-nzszl is not available
Feb  8 14:54:33.633: INFO: Pod daemon-set-ql6v5 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2891, will wait for the garbage collector to delete the pods
Feb  8 14:54:33.755: INFO: Deleting DaemonSet.extensions daemon-set took: 34.865322ms
Feb  8 14:54:34.155: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.379498ms
Feb  8 14:54:40.564: INFO: Number of nodes with available pods: 0
Feb  8 14:54:40.564: INFO: Number of running nodes: 0, number of available pods: 0
Feb  8 14:54:40.567: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2891/daemonsets","resourceVersion":"23584431"},"items":null}

Feb  8 14:54:40.570: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2891/pods","resourceVersion":"23584431"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:54:40.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2891" for this suite.
Feb  8 14:54:46.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:54:46.719: INFO: namespace daemonsets-2891 deletion completed in 6.132895059s

• [SLOW TEST:47.581 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:54:46.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:54:46.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9" in namespace "projected-5654" to be "success or failure"
Feb  8 14:54:46.885: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 19.901581ms
Feb  8 14:54:48.897: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032342874s
Feb  8 14:54:50.907: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042485836s
Feb  8 14:54:52.917: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052485469s
Feb  8 14:54:54.927: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062383704s
STEP: Saw pod success
Feb  8 14:54:54.927: INFO: Pod "downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9" satisfied condition "success or failure"
Feb  8 14:54:54.931: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9 container client-container: 
STEP: delete the pod
Feb  8 14:54:55.012: INFO: Waiting for pod downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9 to disappear
Feb  8 14:54:55.024: INFO: Pod downwardapi-volume-030e2412-aa37-42c1-b544-b73f0818c1f9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:54:55.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5654" for this suite.
Feb  8 14:55:01.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:55:01.179: INFO: namespace projected-5654 deletion completed in 6.151015102s

• [SLOW TEST:14.460 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:55:01.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 14:55:01.286: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768" in namespace "projected-4889" to be "success or failure"
Feb  8 14:55:01.300: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Pending", Reason="", readiness=false. Elapsed: 14.190769ms
Feb  8 14:55:03.307: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020813461s
Feb  8 14:55:05.314: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027610989s
Feb  8 14:55:07.321: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03439527s
Feb  8 14:55:09.498: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211706514s
Feb  8 14:55:11.522: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.235343105s
STEP: Saw pod success
Feb  8 14:55:11.522: INFO: Pod "downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768" satisfied condition "success or failure"
Feb  8 14:55:11.528: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768 container client-container: 
STEP: delete the pod
Feb  8 14:55:11.595: INFO: Waiting for pod downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768 to disappear
Feb  8 14:55:11.611: INFO: Pod downwardapi-volume-ef5b1322-dbc9-47c4-956a-7995fdb7f768 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:55:11.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4889" for this suite.
Feb  8 14:55:17.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:55:17.875: INFO: namespace projected-4889 deletion completed in 6.252889729s

• [SLOW TEST:16.696 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:55:17.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-b73cd1f8-0455-4500-956a-d25e94bb687b
STEP: Creating a pod to test consume configMaps
Feb  8 14:55:18.021: INFO: Waiting up to 5m0s for pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3" in namespace "configmap-1789" to be "success or failure"
Feb  8 14:55:18.041: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.629365ms
Feb  8 14:55:20.060: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03825426s
Feb  8 14:55:22.081: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059189163s
Feb  8 14:55:24.096: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074417282s
Feb  8 14:55:26.105: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083105067s
STEP: Saw pod success
Feb  8 14:55:26.105: INFO: Pod "pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3" satisfied condition "success or failure"
Feb  8 14:55:26.108: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3 container configmap-volume-test: 
STEP: delete the pod
Feb  8 14:55:26.187: INFO: Waiting for pod pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3 to disappear
Feb  8 14:55:26.193: INFO: Pod pod-configmaps-b65a972b-53a5-4753-b3c6-9ffcf5fca6b3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:55:26.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1789" for this suite.
Feb  8 14:55:32.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:55:32.347: INFO: namespace configmap-1789 deletion completed in 6.149900088s

• [SLOW TEST:14.472 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:55:32.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-b492a3f2-d573-4795-b93e-2e6580033ff7
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:55:32.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7861" for this suite.
Feb  8 14:55:38.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 14:55:38.807: INFO: namespace configmap-7861 deletion completed in 6.281391318s

• [SLOW TEST:6.459 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 14:55:38.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  8 14:55:39.653: INFO: Pod name wrapped-volume-race-c76edcc2-2bfd-4286-aa91-c2fe7e58ba74: Found 0 pods out of 5
Feb  8 14:55:44.671: INFO: Pod name wrapped-volume-race-c76edcc2-2bfd-4286-aa91-c2fe7e58ba74: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c76edcc2-2bfd-4286-aa91-c2fe7e58ba74 in namespace emptydir-wrapper-9178, will wait for the garbage collector to delete the pods
Feb  8 14:56:12.798: INFO: Deleting ReplicationController wrapped-volume-race-c76edcc2-2bfd-4286-aa91-c2fe7e58ba74 took: 21.969598ms
Feb  8 14:56:13.198: INFO: Terminating ReplicationController wrapped-volume-race-c76edcc2-2bfd-4286-aa91-c2fe7e58ba74 pods took: 400.494774ms
STEP: Creating RC which spawns configmap-volume pods
Feb  8 14:57:07.091: INFO: Pod name wrapped-volume-race-caca5a91-81be-4a14-826c-8c6e9bc3e9b9: Found 0 pods out of 5
Feb  8 14:57:12.130: INFO: Pod name wrapped-volume-race-caca5a91-81be-4a14-826c-8c6e9bc3e9b9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-caca5a91-81be-4a14-826c-8c6e9bc3e9b9 in namespace emptydir-wrapper-9178, will wait for the garbage collector to delete the pods
Feb  8 14:57:44.296: INFO: Deleting ReplicationController wrapped-volume-race-caca5a91-81be-4a14-826c-8c6e9bc3e9b9 took: 68.469424ms
Feb  8 14:57:44.696: INFO: Terminating ReplicationController wrapped-volume-race-caca5a91-81be-4a14-826c-8c6e9bc3e9b9 pods took: 400.403491ms
STEP: Creating RC which spawns configmap-volume pods
Feb  8 14:58:37.173: INFO: Pod name wrapped-volume-race-3140bfc0-f6c8-4da8-bf88-3808bdad0cc0: Found 0 pods out of 5
Feb  8 14:58:42.186: INFO: Pod name wrapped-volume-race-3140bfc0-f6c8-4da8-bf88-3808bdad0cc0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3140bfc0-f6c8-4da8-bf88-3808bdad0cc0 in namespace emptydir-wrapper-9178, will wait for the garbage collector to delete the pods
Feb  8 14:59:12.295: INFO: Deleting ReplicationController wrapped-volume-race-3140bfc0-f6c8-4da8-bf88-3808bdad0cc0 took: 9.278418ms
Feb  8 14:59:12.695: INFO: Terminating ReplicationController wrapped-volume-race-3140bfc0-f6c8-4da8-bf88-3808bdad0cc0 pods took: 400.517726ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 14:59:57.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9178" for this suite.
Feb  8 15:00:10.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:00:10.236: INFO: namespace emptydir-wrapper-9178 deletion completed in 12.172011126s

• [SLOW TEST:271.429 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:00:10.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:00:16.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5244" for this suite.
Feb  8 15:00:22.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:00:22.917: INFO: namespace namespaces-5244 deletion completed in 6.117258501s
STEP: Destroying namespace "nsdeletetest-6549" for this suite.
Feb  8 15:00:22.920: INFO: Namespace nsdeletetest-6549 was already deleted
STEP: Destroying namespace "nsdeletetest-234" for this suite.
Feb  8 15:00:28.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:00:29.079: INFO: namespace nsdeletetest-234 deletion completed in 6.159543887s

• [SLOW TEST:18.842 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:00:29.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  8 15:00:29.163: INFO: Waiting up to 5m0s for pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8" in namespace "containers-1257" to be "success or failure"
Feb  8 15:00:29.211: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 48.011622ms
Feb  8 15:00:31.218: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055235053s
Feb  8 15:00:33.230: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067049189s
Feb  8 15:00:35.240: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077060075s
Feb  8 15:00:37.285: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122152266s
Feb  8 15:00:39.295: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132287236s
STEP: Saw pod success
Feb  8 15:00:39.295: INFO: Pod "client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8" satisfied condition "success or failure"
Feb  8 15:00:39.304: INFO: Trying to get logs from node iruya-node pod client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8 container test-container: 
STEP: delete the pod
Feb  8 15:00:39.433: INFO: Waiting for pod client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8 to disappear
Feb  8 15:00:39.444: INFO: Pod client-containers-2d87ac43-b54a-4bc9-839a-50679ac14bb8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:00:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1257" for this suite.
Feb  8 15:00:45.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:00:45.572: INFO: namespace containers-1257 deletion completed in 6.118965704s

• [SLOW TEST:16.492 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:00:45.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  8 15:00:45.702: INFO: Waiting up to 5m0s for pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586" in namespace "emptydir-9081" to be "success or failure"
Feb  8 15:00:45.769: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586": Phase="Pending", Reason="", readiness=false. Elapsed: 66.304276ms
Feb  8 15:00:47.779: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076245905s
Feb  8 15:00:49.784: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08141003s
Feb  8 15:00:51.798: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0955346s
Feb  8 15:00:53.815: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112965552s
STEP: Saw pod success
Feb  8 15:00:53.816: INFO: Pod "pod-0582ba65-61c0-4938-8d69-7b9ba2946586" satisfied condition "success or failure"
Feb  8 15:00:53.839: INFO: Trying to get logs from node iruya-node pod pod-0582ba65-61c0-4938-8d69-7b9ba2946586 container test-container: 
STEP: delete the pod
Feb  8 15:00:54.014: INFO: Waiting for pod pod-0582ba65-61c0-4938-8d69-7b9ba2946586 to disappear
Feb  8 15:00:54.095: INFO: Pod pod-0582ba65-61c0-4938-8d69-7b9ba2946586 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:00:54.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9081" for this suite.
Feb  8 15:01:00.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:01:00.271: INFO: namespace emptydir-9081 deletion completed in 6.168147839s

• [SLOW TEST:14.699 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:01:00.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  8 15:01:00.397: INFO: Creating deployment "nginx-deployment"
Feb  8 15:01:00.462: INFO: Waiting for observed generation 1
Feb  8 15:01:02.865: INFO: Waiting for all required pods to come up
Feb  8 15:01:04.467: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  8 15:01:30.239: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  8 15:01:30.247: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  8 15:01:30.259: INFO: Updating deployment nginx-deployment
Feb  8 15:01:30.259: INFO: Waiting for observed generation 2
Feb  8 15:01:33.631: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  8 15:01:34.406: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  8 15:01:34.446: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  8 15:01:34.461: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  8 15:01:34.461: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  8 15:01:34.465: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  8 15:01:34.472: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  8 15:01:34.472: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  8 15:01:34.487: INFO: Updating deployment nginx-deployment
Feb  8 15:01:34.487: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  8 15:01:35.379: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  8 15:01:41.699: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
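The 20/13 split above is what proportional scaling produces: the scale-up headroom (target 30 plus maxSurge 3, i.e. the `deployment.kubernetes.io/max-replicas: 33` annotation seen below, minus the current total of 8 + 5 = 13) is distributed across the ReplicaSets in proportion to their current sizes. The following is a simplified sketch of that arithmetic, not the actual controller code; the function name and rounding details are illustrative assumptions:

```python
def proportional_scale(rs_sizes, new_total, max_surge):
    """Sketch of how a scale-up is split across a Deployment's
    ReplicaSets in proportion to their current sizes (simplified;
    the real controller lives in the deployment controller's sync loop)."""
    allowed = new_total + max_surge        # surge headroom: 30 + 3 = 33
    current = sum(rs_sizes.values())       # current total: 8 + 5 = 13
    to_add = allowed - current             # 20 replicas to hand out
    result, added = {}, 0
    # Larger ReplicaSets get their share first (and any rounding leftover).
    for name, size in sorted(rs_sizes.items(), key=lambda kv: -kv[1]):
        share = round(size * to_add / current)
        share = min(share, to_add - added)  # never hand out more than to_add
        result[name] = size + share
        added += share
    return result

# Matches the log: old RS (8) -> 20, new RS (5) -> 13, summing to 33.
print(proportional_scale({"old-rs": 8, "new-rs": 5}, new_total=30, max_surge=3))
```

With these inputs the old ReplicaSet receives round(8 * 20 / 13) = 12 extra replicas and the new one round(5 * 20 / 13) = 8, reproducing the `.spec.replicas = 20` and `= 13` values the test verifies.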
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  8 15:01:45.565: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-2773,SelfLink:/apis/apps/v1/namespaces/deployment-2773/deployments/nginx-deployment,UID:92268826-5948-43b2-a884-39378af219ff,ResourceVersion:23586186,Generation:3,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-08 15:01:33 +0000 UTC 2020-02-08 15:01:00 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-08 15:01:35 +0000 UTC 2020-02-08 15:01:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  8 15:01:45.597: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-2773,SelfLink:/apis/apps/v1/namespaces/deployment-2773/replicasets/nginx-deployment-55fb7cb77f,UID:23e2a7a1-b2f1-4af6-9099-e0e99ec69cec,ResourceVersion:23586252,Generation:3,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 92268826-5948-43b2-a884-39378af219ff 0xc0020ea1e7 0xc0020ea1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  8 15:01:45.597: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  8 15:01:45.598: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-2773,SelfLink:/apis/apps/v1/namespaces/deployment-2773/replicasets/nginx-deployment-7b8c6f4498,UID:35524bd4-0fd1-4e2b-abf8-1cc89369efa9,ResourceVersion:23586248,Generation:3,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 92268826-5948-43b2-a884-39378af219ff 0xc0020ea2b7 0xc0020ea2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  8 15:01:48.779: INFO: Pod "nginx-deployment-55fb7cb77f-25fnw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-25fnw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-25fnw,UID:1a1f47c4-7cbd-4053-889c-cf2fcdb7ad0c,ResourceVersion:23586172,Generation:0,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8087 0xc0025e8088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e8100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-08 15:01:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.780: INFO: Pod "nginx-deployment-55fb7cb77f-2bcqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2bcqg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-2bcqg,UID:5a161a59-7594-43ef-909c-cceb52fc591f,ResourceVersion:23586210,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e81f7 0xc0025e81f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e8260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.780: INFO: Pod "nginx-deployment-55fb7cb77f-7pwqq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7pwqq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-7pwqq,UID:aef90bd4-ff4b-49dc-b562-ad5d57ba62d3,ResourceVersion:23586247,Generation:0,CreationTimestamp:2020-02-08 15:01:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8307 0xc0025e8308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e83b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-08 15:01:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.781: INFO: Pod "nginx-deployment-55fb7cb77f-b7vz9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b7vz9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-b7vz9,UID:0bc17295-6cbc-4669-a357-66b4e46e6bf3,ResourceVersion:23586150,Generation:0,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8677 0xc0025e8678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e8740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.781: INFO: Pod "nginx-deployment-55fb7cb77f-clcrk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-clcrk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-clcrk,UID:ece773ef-6601-45b9-b989-c706ed010db9,ResourceVersion:23586245,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8927 0xc0025e8928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e89a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e89d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.781: INFO: Pod "nginx-deployment-55fb7cb77f-jhfqg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jhfqg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-jhfqg,UID:450156c5-74ae-4ee8-b5e9-26fb94946e42,ResourceVersion:23586151,Generation:0,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8b17 0xc0025e8b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e8b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-08 15:01:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.782: INFO: Pod "nginx-deployment-55fb7cb77f-ksfg9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ksfg9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-ksfg9,UID:815f7d13-9791-41d6-acde-33d0ae8b48a2,ResourceVersion:23586228,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8ce7 0xc0025e8ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e8d90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.782: INFO: Pod "nginx-deployment-55fb7cb77f-kswlz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kswlz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-kswlz,UID:f5e14f5c-84f7-40b5-adb0-fc4406f1834a,ResourceVersion:23586232,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e8ed7 0xc0025e8ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e8f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e8fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.782: INFO: Pod "nginx-deployment-55fb7cb77f-lj6sw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lj6sw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-lj6sw,UID:3b955a54-b257-40cf-821e-8e2f3391fc99,ResourceVersion:23586174,Generation:0,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e9117 0xc0025e9118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e9190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e91e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.783: INFO: Pod "nginx-deployment-55fb7cb77f-nc687" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nc687,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-nc687,UID:0113551e-7622-4506-9aac-6202936494e3,ResourceVersion:23586160,Generation:0,CreationTimestamp:2020-02-08 15:01:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e9357 0xc0025e9358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e9420} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e9480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:30 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.783: INFO: Pod "nginx-deployment-55fb7cb77f-rthts" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rthts,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-rthts,UID:2bcf50ae-8ca5-4e61-91fe-3041319e61d4,ResourceVersion:23586234,Generation:0,CreationTimestamp:2020-02-08 15:01:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e95d7 0xc0025e95d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e96d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e96f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.783: INFO: Pod "nginx-deployment-55fb7cb77f-tdbfv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tdbfv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-tdbfv,UID:49d50845-9bb1-46bf-b085-c665e084193b,ResourceVersion:23586224,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e97d7 0xc0025e97d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e98d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e98f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.783: INFO: Pod "nginx-deployment-55fb7cb77f-w7jwh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w7jwh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-55fb7cb77f-w7jwh,UID:00154861-76eb-4d67-aa9a-9cf9eed2888a,ResourceVersion:23586223,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 23e2a7a1-b2f1-4af6-9099-e0e99ec69cec 0xc0025e99c7 0xc0025e99c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0025e9b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e9b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.784: INFO: Pod "nginx-deployment-7b8c6f4498-2zp8r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2zp8r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-2zp8r,UID:339af691-b4b2-4016-ab17-17a987f8fdb4,ResourceVersion:23586215,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0025e9c07 0xc0025e9c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e9c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e9ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.784: INFO: Pod "nginx-deployment-7b8c6f4498-4jzvq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4jzvq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-4jzvq,UID:5d278a74-f9a4-4367-b3e7-a7773b950c55,ResourceVersion:23586233,Generation:0,CreationTimestamp:2020-02-08 15:01:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0025e9da7 0xc0025e9da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025e9e20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025e9e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-08 15:01:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.784: INFO: Pod "nginx-deployment-7b8c6f4498-65dx8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-65dx8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-65dx8,UID:3f6b07a8-0bab-4638-bceb-8db35b72ca6b,ResourceVersion:23586238,Generation:0,CreationTimestamp:2020-02-08 15:01:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0025e9fd7 0xc0025e9fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.784: INFO: Pod "nginx-deployment-7b8c6f4498-6bftt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6bftt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-6bftt,UID:e982d734-d5a1-400b-80d6-44c355a17096,ResourceVersion:23586222,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8137 0xc0033b8138}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b81b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b81d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.785: INFO: Pod "nginx-deployment-7b8c6f4498-6skrf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6skrf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-6skrf,UID:0f8ce0af-f461-4752-900e-3b8a86d83eaf,ResourceVersion:23586105,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8257 0xc0033b8258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b82c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b82e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-08 15:01:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e7d1b8220a868e55dc1487dea84114b592af3f8691e64e5f3ab2c03d3abcd25e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.785: INFO: Pod "nginx-deployment-7b8c6f4498-9xqr8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9xqr8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-9xqr8,UID:75d328e8-07f7-449a-b0c7-11192f600339,ResourceVersion:23586061,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b83b7 0xc0033b83b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-08 15:01:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e2a3ff97263d4d0c87f567af21f877da586a407deae3c13b4253aabf1ddb877e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.785: INFO: Pod "nginx-deployment-7b8c6f4498-cxt7c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cxt7c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-cxt7c,UID:30189d4c-5240-43b0-8975-783b7331cd5d,ResourceVersion:23586085,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8527 0xc0033b8528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b85a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b85c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-08 15:01:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://20cbb59753247c96941ccb149039f191d35fa0e2b2f88b9a4f4040c81a0ce8b5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.786: INFO: Pod "nginx-deployment-7b8c6f4498-dz4nt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dz4nt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-dz4nt,UID:b427e6b3-fda8-49a8-9d4f-2845d9af82e7,ResourceVersion:23586227,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b86b7 0xc0033b86b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.786: INFO: Pod "nginx-deployment-7b8c6f4498-glm9c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-glm9c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-glm9c,UID:18332da3-93f7-4904-91c3-aae6f5dd3378,ResourceVersion:23586076,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b87c7 0xc0033b87c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-08 15:01:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fce9bf8fa380badd2041c63ae6700b99957a095f4bbcb5818d668e79c0c10718}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.786: INFO: Pod "nginx-deployment-7b8c6f4498-js5wx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-js5wx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-js5wx,UID:e2991496-e8e3-4e31-b8f2-dbcc67fa7d89,ResourceVersion:23586230,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8947 0xc0033b8948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b89c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b89e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.786: INFO: Pod "nginx-deployment-7b8c6f4498-l6xps" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l6xps,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-l6xps,UID:d0841023-8f41-43c9-9024-bd404a09de6a,ResourceVersion:23586217,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8a67 0xc0033b8a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.786: INFO: Pod "nginx-deployment-7b8c6f4498-p5pzb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p5pzb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-p5pzb,UID:9c01e2f4-28af-4895-8591-a6fafce73640,ResourceVersion:23586088,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8b87 0xc0033b8b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-08 15:01:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8aa4bb2763f5fc0581f591a28e9664a87e785ae29eeecad8bbd0a905763c1a7b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.787: INFO: Pod "nginx-deployment-7b8c6f4498-psjss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-psjss,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-psjss,UID:aaf1a7c0-58d7-4877-a466-7c4af5146f0d,ResourceVersion:23586216,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8cf7 0xc0033b8cf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.787: INFO: Pod "nginx-deployment-7b8c6f4498-q2kv8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q2kv8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-q2kv8,UID:3798ca8d-1cd5-495a-90a1-bc58d9905e05,ResourceVersion:23586255,Generation:0,CreationTimestamp:2020-02-08 15:01:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8e07 0xc0033b8e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b8ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-08 15:01:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.787: INFO: Pod "nginx-deployment-7b8c6f4498-qlpfm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qlpfm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-qlpfm,UID:db9361e2-28a4-42f0-b0f0-fe99acc6bc5c,ResourceVersion:23586220,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b8f77 0xc0033b8f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b8ff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b9010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.787: INFO: Pod "nginx-deployment-7b8c6f4498-r286g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r286g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-r286g,UID:8add9ec1-cdef-4ba3-8ab4-364e8f4d27cd,ResourceVersion:23586081,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b90b7 0xc0033b90b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b9130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b9150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-08 15:01:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:23 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://000e202cbbc7f8c33f63d2862596213edf3dbc5598329b80a3d8bd3212ddcef2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.788: INFO: Pod "nginx-deployment-7b8c6f4498-szrp9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-szrp9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-szrp9,UID:d799f9cd-8795-4c29-848f-710a3aad7f7f,ResourceVersion:23586260,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b9227 0xc0033b9228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b92a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b92c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-08 15:01:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.788: INFO: Pod "nginx-deployment-7b8c6f4498-tkmx2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tkmx2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-tkmx2,UID:63b96733-7fa1-4b9e-acd5-bb38b1ac7f37,ResourceVersion:23586231,Generation:0,CreationTimestamp:2020-02-08 15:01:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b9387 0xc0033b9388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b93f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b9410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.788: INFO: Pod "nginx-deployment-7b8c6f4498-vtrlh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vtrlh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-vtrlh,UID:b591f427-d8c8-4253-8333-403181602615,ResourceVersion:23586108,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b9497 0xc0033b9498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b9500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b9520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-08 15:01:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fdfdea7e5b175545521750e5b76181ee40a624aa51328b859ba19d4a17b009b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  8 15:01:48.788: INFO: Pod "nginx-deployment-7b8c6f4498-w6t2z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w6t2z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-2773,SelfLink:/api/v1/namespaces/deployment-2773/pods/nginx-deployment-7b8c6f4498-w6t2z,UID:1f4f017f-9136-4bfe-b220-1f26f83362a9,ResourceVersion:23586111,Generation:0,CreationTimestamp:2020-02-08 15:01:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 35524bd4-0fd1-4e2b-abf8-1cc89369efa9 0xc0033b95f7 0xc0033b95f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cr8s6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cr8s6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-cr8s6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0033b9660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0033b9680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-08 15:01:00 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-08 15:01:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-08 15:01:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bc1b4ddd0a2cca53054ab4aa6a8ef21d47bc87bdcb82779d101f8d3fcd9d643e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:01:48.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2773" for this suite.
Feb  8 15:03:00.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:03:01.400: INFO: namespace deployment-2773 deletion completed in 1m11.431484798s

• [SLOW TEST:121.127 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
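For reference, a minimal Deployment manifest resembling the `nginx-deployment` object exercised by this test can be reconstructed from the pod dumps above. The namespace, labels, image, and zero termination grace period are taken directly from the log; the replica count is illustrative, since the test itself scales the deployment up and down to exercise proportional scaling:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-2773
spec:
  replicas: 3                      # illustrative; the test varies this to trigger proportional scaling
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0   # matches TerminationGracePeriodSeconds:*0 in the pod dumps
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```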
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:03:01.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:03:53.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9467" for this suite.
Feb  8 15:03:59.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:03:59.975: INFO: namespace namespaces-9467 deletion completed in 6.226871939s
STEP: Destroying namespace "nsdeletetest-89" for this suite.
Feb  8 15:03:59.977: INFO: Namespace nsdeletetest-89 was already deleted
STEP: Destroying namespace "nsdeletetest-7379" for this suite.
Feb  8 15:04:05.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:04:06.075: INFO: namespace nsdeletetest-7379 deletion completed in 6.097517641s

• [SLOW TEST:64.675 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:04:06.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42
Feb  8 15:04:06.226: INFO: Pod name my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42: Found 0 pods out of 1
Feb  8 15:04:11.233: INFO: Pod name my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42: Found 1 pods out of 1
Feb  8 15:04:11.233: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42" are running
Feb  8 15:04:15.245: INFO: Pod "my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42-jrvj2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 15:04:06 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 15:04:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 15:04:06 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-08 15:04:06 +0000 UTC Reason: Message:}])
Feb  8 15:04:15.245: INFO: Trying to dial the pod
Feb  8 15:04:20.654: INFO: Controller my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42: Got expected result from replica 1 [my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42-jrvj2]: "my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42-jrvj2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:04:20.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2368" for this suite.
Feb  8 15:04:26.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:04:26.841: INFO: namespace replication-controller-2368 deletion completed in 6.179991077s

• [SLOW TEST:20.766 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
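A ReplicationController matching the object this test creates would look roughly as follows. The controller name, replica count, and namespace come from the log; the image is a placeholder, since the log does not record which public serve-hostname image the test uses:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42
  namespace: replication-controller-2368
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42
  template:
    metadata:
      labels:
        name: my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42
    spec:
      containers:
      - name: my-hostname-basic-d36cb8ca-c080-46c1-857f-ee0707db6b42
        # Placeholder: the test serves the pod's hostname over HTTP from a public
        # image, but the exact image reference is not shown in this log.
        image: example.invalid/serve-hostname:placeholder
```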
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:04:26.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b1ae1d3c-40f6-4446-afea-5554c5be71fd
STEP: Creating a pod to test consume secrets
Feb  8 15:04:27.028: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6" in namespace "projected-1655" to be "success or failure"
Feb  8 15:04:27.038: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.089045ms
Feb  8 15:04:29.059: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030874334s
Feb  8 15:04:31.076: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047641342s
Feb  8 15:04:33.083: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054524194s
Feb  8 15:04:35.096: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067740559s
Feb  8 15:04:37.107: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078404177s
STEP: Saw pod success
Feb  8 15:04:37.107: INFO: Pod "pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6" satisfied condition "success or failure"
Feb  8 15:04:37.113: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6 container projected-secret-volume-test: 
STEP: delete the pod
Feb  8 15:04:37.232: INFO: Waiting for pod pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6 to disappear
Feb  8 15:04:37.243: INFO: Pod pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:04:37.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1655" for this suite.
Feb  8 15:04:43.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:04:43.506: INFO: namespace projected-1655 deletion completed in 6.254830141s

• [SLOW TEST:16.665 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
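A sketch of the pod this test builds: a projected secret volume whose items are remapped to new paths with an explicit per-item mode (the "Item Mode" the test verifies). The pod, secret, container, and namespace names come from the log; the image, secret key, target path, and mode value are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-bea35019-4e50-4ac6-abfd-0ab5350244a6
  namespace: projected-1655
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29          # illustrative; image not shown in the log
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-b1ae1d3c-40f6-4446-afea-5554c5be71fd
          items:
          - key: data-1                            # assumed key name
            path: new-path-data-1                  # the "mapping" the test exercises
            mode: 0400                             # the per-item mode; exact value illustrative
```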
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:04:43.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  8 15:04:43.603: INFO: Waiting up to 5m0s for pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7" in namespace "emptydir-1442" to be "success or failure"
Feb  8 15:04:43.662: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Pending", Reason="", readiness=false. Elapsed: 59.0057ms
Feb  8 15:04:45.670: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066996848s
Feb  8 15:04:47.678: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074595587s
Feb  8 15:04:49.687: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083982656s
Feb  8 15:04:51.695: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092058133s
Feb  8 15:04:53.707: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103481359s
STEP: Saw pod success
Feb  8 15:04:53.707: INFO: Pod "pod-c83ffbf9-a91e-453f-882c-029cf0b476b7" satisfied condition "success or failure"
Feb  8 15:04:53.713: INFO: Trying to get logs from node iruya-node pod pod-c83ffbf9-a91e-453f-882c-029cf0b476b7 container test-container: 
STEP: delete the pod
Feb  8 15:04:53.993: INFO: Waiting for pod pod-c83ffbf9-a91e-453f-882c-029cf0b476b7 to disappear
Feb  8 15:04:54.002: INFO: Pod pod-c83ffbf9-a91e-453f-882c-029cf0b476b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:04:54.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1442" for this suite.
Feb  8 15:05:00.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:05:00.152: INFO: namespace emptydir-1442 deletion completed in 6.137955497s

• [SLOW TEST:16.646 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
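The "(non-root,0644,default)" variant can be sketched as a pod that runs as a non-root user, writes a file with mode 0644 into an emptyDir on the default (node-disk) medium, and reports the result in its logs. Pod name, container name, and namespace come from the log; the UID, image, and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-c83ffbf9-a91e-453f-882c-029cf0b476b7
  namespace: emptydir-1442
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # "non-root"; exact UID illustrative
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative; image not shown in the log
    command: ["sh", "-c", "touch /test-volume/test-file && chmod 0644 /test-volume/test-file && ls -l /test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # "default" medium, i.e. node storage rather than tmpfs
```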
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:05:00.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 15:05:00.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340" in namespace "projected-6994" to be "success or failure"
Feb  8 15:05:00.249: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340": Phase="Pending", Reason="", readiness=false. Elapsed: 7.584181ms
Feb  8 15:05:02.258: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015697333s
Feb  8 15:05:04.265: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022704503s
Feb  8 15:05:06.281: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039182707s
Feb  8 15:05:08.292: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050661892s
STEP: Saw pod success
Feb  8 15:05:08.293: INFO: Pod "downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340" satisfied condition "success or failure"
Feb  8 15:05:08.300: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340 container client-container: 
STEP: delete the pod
Feb  8 15:05:08.398: INFO: Waiting for pod downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340 to disappear
Feb  8 15:05:08.419: INFO: Pod downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:05:08.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6994" for this suite.
Feb  8 15:05:14.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:05:14.563: INFO: namespace projected-6994 deletion completed in 6.138642807s

• [SLOW TEST:14.411 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
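The downward API volume plugin tested here exposes the container's CPU limit as a file via a `resourceFieldRef`. A sketch of the pod, with names taken from the log and the image, file path, and limit value as illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-6b5a78b1-b22d-483f-a04d-c7370cb9f340
  namespace: projected-6994
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative; image not shown in the log
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                            # illustrative; the test asserts this value appears in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```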
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:05:14.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  8 15:05:14.701: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  8 15:05:14.735: INFO: Waiting for terminating namespaces to be deleted...
Feb  8 15:05:14.743: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  8 15:05:14.767: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  8 15:05:14.767: INFO: 	Container weave ready: true, restart count 0
Feb  8 15:05:14.767: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 15:05:14.767: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.767: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 15:05:14.767: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  8 15:05:14.810: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  8 15:05:14.810: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container coredns ready: true, restart count 0
Feb  8 15:05:14.810: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container coredns ready: true, restart count 0
Feb  8 15:05:14.810: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container etcd ready: true, restart count 0
Feb  8 15:05:14.810: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  8 15:05:14.810: INFO: 	Container weave ready: true, restart count 0
Feb  8 15:05:14.810: INFO: 	Container weave-npc ready: true, restart count 0
Feb  8 15:05:14.810: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  8 15:05:14.810: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  8 15:05:14.810: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  8 15:05:14.810: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0737efe3-565b-46d3-b186-de8d622740c4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-0737efe3-565b-46d3-b186-de8d622740c4 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0737efe3-565b-46d3-b186-de8d622740c4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:05:31.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5450" for this suite.
Feb  8 15:05:51.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:05:51.272: INFO: namespace sched-pred-5450 deletion completed in 20.205322605s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:36.709 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:05:51.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 15:05:51.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648" in namespace "projected-7918" to be "success or failure"
Feb  8 15:05:51.512: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Pending", Reason="", readiness=false. Elapsed: 6.665457ms
Feb  8 15:05:53.522: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016909149s
Feb  8 15:05:55.542: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036294881s
Feb  8 15:05:57.549: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043659432s
Feb  8 15:05:59.555: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050037959s
Feb  8 15:06:01.563: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057628766s
STEP: Saw pod success
Feb  8 15:06:01.563: INFO: Pod "downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648" satisfied condition "success or failure"
Feb  8 15:06:01.568: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648 container client-container: 
STEP: delete the pod
Feb  8 15:06:01.679: INFO: Waiting for pod downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648 to disappear
Feb  8 15:06:01.684: INFO: Pod downwardapi-volume-332070d7-20b6-4170-a7d6-548584b48648 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:06:01.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7918" for this suite.
Feb  8 15:06:07.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:06:07.898: INFO: namespace projected-7918 deletion completed in 6.209072415s

• [SLOW TEST:16.625 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:06:07.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  8 15:06:07.972: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:06:23.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5306" for this suite.
Feb  8 15:06:29.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:06:29.611: INFO: namespace pods-5306 deletion completed in 6.219847635s

• [SLOW TEST:21.713 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:06:29.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4076/configmap-test-62d58202-9d37-4348-8ce8-41cb47d881ca
STEP: Creating a pod to test consume configMaps
Feb  8 15:06:29.851: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51" in namespace "configmap-4076" to be "success or failure"
Feb  8 15:06:29.941: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51": Phase="Pending", Reason="", readiness=false. Elapsed: 89.684702ms
Feb  8 15:06:31.950: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098811736s
Feb  8 15:06:33.963: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111985164s
Feb  8 15:06:35.969: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118515107s
Feb  8 15:06:37.976: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124742443s
STEP: Saw pod success
Feb  8 15:06:37.976: INFO: Pod "pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51" satisfied condition "success or failure"
Feb  8 15:06:37.978: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51 container env-test: 
STEP: delete the pod
Feb  8 15:06:38.049: INFO: Waiting for pod pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51 to disappear
Feb  8 15:06:38.059: INFO: Pod pod-configmaps-a5ea4fc2-5e07-42d1-bec4-bfc8181bbc51 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:06:38.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4076" for this suite.
Feb  8 15:06:44.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:06:44.203: INFO: namespace configmap-4076 deletion completed in 6.139904623s

• [SLOW TEST:14.591 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:06:44.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  8 15:06:44.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-549'
Feb  8 15:06:46.328: INFO: stderr: ""
Feb  8 15:06:46.328: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  8 15:06:56.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-549 -o json'
Feb  8 15:06:56.577: INFO: stderr: ""
Feb  8 15:06:56.577: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-08T15:06:46Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-549\",\n        \"resourceVersion\": \"23587189\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-549/pods/e2e-test-nginx-pod\",\n        \"uid\": \"46782ffd-a9de-4d64-8f96-87fafde836bd\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-c7fxs\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-c7fxs\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-c7fxs\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T15:06:46Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T15:06:54Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T15:06:54Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-08T15:06:46Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://214ea03b68b558b7a79472cb2e577a30f6d4cbaf60eb24a8f576ee6e3cb8101c\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-02-08T15:06:52Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-08T15:06:46Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  8 15:06:56.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-549'
Feb  8 15:06:56.931: INFO: stderr: ""
Feb  8 15:06:56.931: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  8 15:06:56.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-549'
Feb  8 15:07:03.830: INFO: stderr: ""
Feb  8 15:07:03.830: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:07:03.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-549" for this suite.
Feb  8 15:07:09.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:07:09.974: INFO: namespace kubectl-549 deletion completed in 6.133062346s

• [SLOW TEST:25.770 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:07:09.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-5baf5350-75a0-4e76-8231-54c179533453
STEP: Creating a pod to test consume configMaps
Feb  8 15:07:10.087: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb" in namespace "configmap-301" to be "success or failure"
Feb  8 15:07:10.097: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168644ms
Feb  8 15:07:12.163: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07635162s
Feb  8 15:07:14.171: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083677511s
Feb  8 15:07:16.204: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11734851s
Feb  8 15:07:18.244: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.157270936s
STEP: Saw pod success
Feb  8 15:07:18.244: INFO: Pod "pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb" satisfied condition "success or failure"
Feb  8 15:07:18.248: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb container configmap-volume-test: 
STEP: delete the pod
Feb  8 15:07:18.293: INFO: Waiting for pod pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb to disappear
Feb  8 15:07:18.312: INFO: Pod pod-configmaps-2d646489-89e3-4e16-bddf-3bcddd41cbfb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:07:18.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-301" for this suite.
Feb  8 15:07:24.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:07:24.462: INFO: namespace configmap-301 deletion completed in 6.143161139s

• [SLOW TEST:14.488 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:07:24.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  8 15:07:40.704: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:40.784: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:42.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:42.875: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:44.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:44.797: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:46.785: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:46.794: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:48.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:48.799: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:50.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:50.791: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:52.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:52.794: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:54.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:54.796: INFO: Pod pod-with-prestop-http-hook still exists
Feb  8 15:07:56.784: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  8 15:07:56.798: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:07:56.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6869" for this suite.
Feb  8 15:08:18.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:08:19.015: INFO: namespace container-lifecycle-hook-6869 deletion completed in 22.168490016s

• [SLOW TEST:54.553 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  8 15:08:19.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  8 15:08:19.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d" in namespace "downward-api-245" to be "success or failure"
Feb  8 15:08:19.180: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.93877ms
Feb  8 15:08:21.192: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031845033s
Feb  8 15:08:23.201: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040801769s
Feb  8 15:08:25.208: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047743288s
Feb  8 15:08:27.262: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101567439s
STEP: Saw pod success
Feb  8 15:08:27.262: INFO: Pod "downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d" satisfied condition "success or failure"
Feb  8 15:08:27.266: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d container client-container: 
STEP: delete the pod
Feb  8 15:08:27.320: INFO: Waiting for pod downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d to disappear
Feb  8 15:08:27.435: INFO: Pod downwardapi-volume-06bf2567-cfce-4dea-ad80-de67c8c0714d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  8 15:08:27.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-245" for this suite.
Feb  8 15:08:33.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  8 15:08:33.646: INFO: namespace downward-api-245 deletion completed in 6.200049632s

• [SLOW TEST:14.630 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSFeb  8 15:08:33.647: INFO: Running AfterSuite actions on all nodes
Feb  8 15:08:33.647: INFO: Running AfterSuite actions on node 1
Feb  8 15:08:33.647: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7944.287 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS