I0128 12:56:11.209665       9 e2e.go:243] Starting e2e run "18d4cdca-1599-451e-af4a-7fb92d7e49cc" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580216169 - Will randomize all specs
Will run 215 of 4412 specs

Jan 28 12:56:11.735: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 12:56:11.741: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 28 12:56:12.357: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 28 12:56:12.448: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 28 12:56:12.448: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 28 12:56:12.448: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 28 12:56:12.572: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 28 12:56:12.572: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 28 12:56:12.572: INFO: e2e test version: v1.15.7
Jan 28 12:56:12.578: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:56:12.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Jan 28 12:56:12.786: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan 28 12:56:12.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2356'
Jan 28 12:56:14.979: INFO: stderr: ""
Jan 28 12:56:14.979: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 12:56:14.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2356'
Jan 28 12:56:15.189: INFO: stderr: ""
Jan 28 12:56:15.189: INFO: stdout: "update-demo-nautilus-lpjqd update-demo-nautilus-vsl8h "
Jan 28 12:56:15.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpjqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:15.292: INFO: stderr: ""
Jan 28 12:56:15.292: INFO: stdout: ""
Jan 28 12:56:15.292: INFO: update-demo-nautilus-lpjqd is created but not running
Jan 28 12:56:20.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2356'
Jan 28 12:56:21.340: INFO: stderr: ""
Jan 28 12:56:21.340: INFO: stdout: "update-demo-nautilus-lpjqd update-demo-nautilus-vsl8h "
Jan 28 12:56:21.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpjqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:21.874: INFO: stderr: ""
Jan 28 12:56:21.875: INFO: stdout: ""
Jan 28 12:56:21.875: INFO: update-demo-nautilus-lpjqd is created but not running
Jan 28 12:56:26.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2356'
Jan 28 12:56:27.063: INFO: stderr: ""
Jan 28 12:56:27.063: INFO: stdout: "update-demo-nautilus-lpjqd update-demo-nautilus-vsl8h "
Jan 28 12:56:27.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpjqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:27.191: INFO: stderr: ""
Jan 28 12:56:27.191: INFO: stdout: "true"
Jan 28 12:56:27.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpjqd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:27.309: INFO: stderr: ""
Jan 28 12:56:27.309: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 12:56:27.309: INFO: validating pod update-demo-nautilus-lpjqd
Jan 28 12:56:27.340: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 12:56:27.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 12:56:27.340: INFO: update-demo-nautilus-lpjqd is verified up and running
Jan 28 12:56:27.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsl8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:27.432: INFO: stderr: ""
Jan 28 12:56:27.432: INFO: stdout: "true"
Jan 28 12:56:27.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsl8h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:27.512: INFO: stderr: ""
Jan 28 12:56:27.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 12:56:27.512: INFO: validating pod update-demo-nautilus-vsl8h
Jan 28 12:56:27.532: INFO: got data: { "image": "nautilus.jpg" }
Jan 28 12:56:27.532: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 12:56:27.532: INFO: update-demo-nautilus-vsl8h is verified up and running
STEP: rolling-update to new replication controller
Jan 28 12:56:27.537: INFO: scanned /root for discovery docs:
Jan 28 12:56:27.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-2356'
Jan 28 12:56:58.806: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 12:56:58.806: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 12:56:58.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2356'
Jan 28 12:56:58.976: INFO: stderr: ""
Jan 28 12:56:58.976: INFO: stdout: "update-demo-kitten-vf4tc update-demo-kitten-wqznv "
Jan 28 12:56:58.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vf4tc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:59.144: INFO: stderr: ""
Jan 28 12:56:59.144: INFO: stdout: "true"
Jan 28 12:56:59.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vf4tc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:59.295: INFO: stderr: ""
Jan 28 12:56:59.295: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 12:56:59.295: INFO: validating pod update-demo-kitten-vf4tc
Jan 28 12:56:59.340: INFO: got data: { "image": "kitten.jpg" }
Jan 28 12:56:59.340: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 28 12:56:59.340: INFO: update-demo-kitten-vf4tc is verified up and running
Jan 28 12:56:59.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wqznv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:59.487: INFO: stderr: ""
Jan 28 12:56:59.487: INFO: stdout: "true"
Jan 28 12:56:59.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wqznv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2356'
Jan 28 12:56:59.605: INFO: stderr: ""
Jan 28 12:56:59.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 12:56:59.606: INFO: validating pod update-demo-kitten-wqznv
Jan 28 12:56:59.640: INFO: got data: { "image": "kitten.jpg" }
Jan 28 12:56:59.640: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 28 12:56:59.641: INFO: update-demo-kitten-wqznv is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:56:59.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2356" for this suite.
Jan 28 12:57:23.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:57:23.838: INFO: namespace kubectl-2356 deletion completed in 24.180961109s

• [SLOW TEST:71.258 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:57:23.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan 28 12:57:23.989: INFO: Waiting up to 5m0s for pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d" in namespace "containers-3575" to be "success or failure"
Jan 28 12:57:23.999: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.048099ms
Jan 28 12:57:26.017: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02778925s
Jan 28 12:57:28.054: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064849047s
Jan 28 12:57:30.060: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07081633s
Jan 28 12:57:32.084: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09509203s
Jan 28 12:57:34.095: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.105684646s
STEP: Saw pod success
Jan 28 12:57:34.095: INFO: Pod "client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d" satisfied condition "success or failure"
Jan 28 12:57:34.099: INFO: Trying to get logs from node iruya-node pod client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d container test-container:
STEP: delete the pod
Jan 28 12:57:34.271: INFO: Waiting for pod client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d to disappear
Jan 28 12:57:34.326: INFO: Pod client-containers-865bb48d-359a-4bc6-a61d-8fb0bbb3997d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:57:34.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3575" for this suite.
Jan 28 12:57:40.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:57:40.555: INFO: namespace containers-3575 deletion completed in 6.21904701s

• [SLOW TEST:16.716 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:57:40.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:57:41.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4892" for this suite.
Jan 28 12:57:47.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:57:47.279: INFO: namespace services-4892 deletion completed in 6.17757483s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.722 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:57:47.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f672de06-ac63-4851-920e-5fa9d802fa5f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f672de06-ac63-4851-920e-5fa9d802fa5f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:57:59.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-843" for this suite.
Jan 28 12:58:21.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:58:21.909: INFO: namespace configmap-843 deletion completed in 22.197432906s

• [SLOW TEST:34.629 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:58:21.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 12:58:22.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8" in namespace "downward-api-1757" to be "success or failure"
Jan 28 12:58:22.388: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 174.105739ms
Jan 28 12:58:24.397: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182152968s
Jan 28 12:58:26.411: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196781489s
Jan 28 12:58:28.425: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21094541s
Jan 28 12:58:30.457: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.242805989s
Jan 28 12:58:32.467: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.252178492s
STEP: Saw pod success
Jan 28 12:58:32.467: INFO: Pod "downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8" satisfied condition "success or failure"
Jan 28 12:58:32.471: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8 container client-container:
STEP: delete the pod
Jan 28 12:58:32.612: INFO: Waiting for pod downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8 to disappear
Jan 28 12:58:32.619: INFO: Pod downwardapi-volume-8ddfc4a0-a892-4c6d-9d1d-bec49268ffd8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:58:32.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1757" for this suite.
Jan 28 12:58:38.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:58:38.808: INFO: namespace downward-api-1757 deletion completed in 6.181244871s

• [SLOW TEST:16.897 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:58:38.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0f014002-b375-464e-94bd-b158e16e1035
STEP: Creating a pod to test consume configMaps
Jan 28 12:58:39.008: INFO: Waiting up to 5m0s for pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab" in namespace "configmap-6649" to be "success or failure"
Jan 28 12:58:39.157: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 148.239069ms
Jan 28 12:58:41.163: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154463831s
Jan 28 12:58:43.171: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161959909s
Jan 28 12:58:45.178: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169750988s
Jan 28 12:58:47.194: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185892043s
Jan 28 12:58:49.204: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Pending", Reason="", readiness=false. Elapsed: 10.195387252s
Jan 28 12:58:51.215: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.206241218s
STEP: Saw pod success
Jan 28 12:58:51.215: INFO: Pod "pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab" satisfied condition "success or failure"
Jan 28 12:58:51.221: INFO: Trying to get logs from node iruya-node pod pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab container configmap-volume-test:
STEP: delete the pod
Jan 28 12:58:51.272: INFO: Waiting for pod pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab to disappear
Jan 28 12:58:51.285: INFO: Pod pod-configmaps-40675886-5acd-47f7-a348-850f9bd3ecab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:58:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6649" for this suite.
Jan 28 12:58:57.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:58:57.448: INFO: namespace configmap-6649 deletion completed in 6.156510241s

• [SLOW TEST:18.639 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:58:57.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0128 12:59:06.646572       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 12:59:06.646: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:59:06.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4289" for this suite.
Jan 28 12:59:18.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:59:18.894: INFO: namespace gc-4289 deletion completed in 12.243148615s

• [SLOW TEST:21.445 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:59:18.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f26fab51-ac14-4f6f-8419-beb12fd06c57
STEP: Creating a pod to test consume secrets
Jan 28 12:59:19.143: INFO: Waiting up to 5m0s for pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc" in namespace "secrets-5175" to be "success or failure"
Jan 28 12:59:19.169: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 25.742045ms
Jan 28 12:59:21.175: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032419068s
Jan 28 12:59:23.191: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047929098s
Jan 28 12:59:25.204: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060725521s
Jan 28 12:59:27.221: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077893844s
Jan 28 12:59:29.238: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.095425568s
Jan 28 12:59:31.263: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.119823846s
STEP: Saw pod success
Jan 28 12:59:31.263: INFO: Pod "pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc" satisfied condition "success or failure"
Jan 28 12:59:31.275: INFO: Trying to get logs from node iruya-node pod pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc container secret-volume-test:
STEP: delete the pod
Jan 28 12:59:31.389: INFO: Waiting for pod pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc to disappear
Jan 28 12:59:31.396: INFO: Pod pod-secrets-b29650b0-0613-4eef-bb76-c21e8c7018fc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:59:31.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5175" for this suite.
Jan 28 12:59:37.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:59:37.499: INFO: namespace secrets-5175 deletion completed in 6.094310145s

• [SLOW TEST:18.604 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:59:37.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-f2075a0f-d6d3-46f3-8fc5-21c95d155eea
STEP: Creating a pod to test consume secrets
Jan 28 12:59:37.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1" in namespace "projected-7606" to be "success or failure"
Jan 28 12:59:37.761: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.493738ms
Jan 28 12:59:39.774: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039509148s
Jan 28 12:59:41.788: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053799858s
Jan 28 12:59:43.801: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066836175s
Jan 28 12:59:45.824: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089168381s
Jan 28 12:59:48.113: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.378417892s
Jan 28 12:59:50.124: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.388978375s
STEP: Saw pod success
Jan 28 12:59:50.124: INFO: Pod "pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1" satisfied condition "success or failure"
Jan 28 12:59:50.128: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1 container projected-secret-volume-test:
STEP: delete the pod
Jan 28 12:59:50.264: INFO: Waiting for pod pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1 to disappear
Jan 28 12:59:50.274: INFO: Pod pod-projected-secrets-5fb511ed-5907-4ea4-a723-f3563abbc1e1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 12:59:50.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7606" for this suite.
Jan 28 12:59:56.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 12:59:56.513: INFO: namespace projected-7606 deletion completed in 6.229349259s
• [SLOW TEST:19.015 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 12:59:56.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 28 13:00:07.260: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5889 pod-service-account-64867cd2-3e5a-4536-a672-ee49b1432b22 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 28 13:00:07.833: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5889 pod-service-account-64867cd2-3e5a-4536-a672-ee49b1432b22 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 28 13:00:08.595: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5889 pod-service-account-64867cd2-3e5a-4536-a672-ee49b1432b22 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:00:09.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5889" for this suite.
Jan 28 13:00:15.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:00:15.334: INFO: namespace svcaccounts-5889 deletion completed in 6.214627644s
• [SLOW TEST:18.820 seconds]
[sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:00:15.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8445.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8445.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8445.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8445.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8445.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8445.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 13:00:29.535: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.543: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.552: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-8445.svc.cluster.local from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.562: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.568: INFO: Unable to read jessie_udp@PodARecord from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.576: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8: the server could not find the requested resource (get pods dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8)
Jan 28 13:00:29.576: INFO: Lookups using dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-8445.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Jan 28 13:00:34.676: INFO: DNS probes using dns-8445/dns-test-21995a4a-f1e5-4627-a428-a0fff77165b8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:00:34.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8445" for this suite.
Jan 28 13:00:40.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:00:41.119: INFO: namespace dns-8445 deletion completed in 6.291419645s
• [SLOW TEST:25.782 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:00:41.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1945
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 13:00:41.215: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 13:01:13.491: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1945 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 13:01:13.491: INFO: >>> kubeConfig: /root/.kube/config
I0128 13:01:13.577585 9 log.go:172] (0xc000a11080) (0xc0001b6aa0) Create stream
I0128 13:01:13.577727 9 log.go:172] (0xc000a11080) (0xc0001b6aa0) Stream added, broadcasting: 1
I0128 13:01:13.590480 9 log.go:172] (0xc000a11080) Reply frame received for 1
I0128 13:01:13.590594 9 log.go:172] (0xc000a11080) (0xc001d1a000) Create stream
I0128 13:01:13.590607 9 log.go:172] (0xc000a11080) (0xc001d1a000) Stream added, broadcasting: 3
I0128 13:01:13.593619 9 log.go:172] (0xc000a11080) Reply frame received for 3
I0128 13:01:13.593674 9 log.go:172] (0xc000a11080) (0xc0025d4140) Create stream
I0128 13:01:13.593692 9 log.go:172] (0xc000a11080) (0xc0025d4140) Stream added, broadcasting: 5
I0128 13:01:13.598155 9 log.go:172] (0xc000a11080) Reply frame received for 5
I0128 13:01:13.894414 9 log.go:172] (0xc000a11080) Data frame received for 3
I0128 13:01:13.894635 9 log.go:172] (0xc001d1a000) (3) Data frame handling
I0128 13:01:13.894704 9 log.go:172] (0xc001d1a000) (3) Data frame sent
I0128 13:01:14.153265 9 log.go:172] (0xc000a11080) (0xc001d1a000) Stream removed, broadcasting: 3
I0128 13:01:14.153564 9 log.go:172] (0xc000a11080) Data frame received for 1
I0128 13:01:14.153599 9 log.go:172] (0xc0001b6aa0) (1) Data frame handling
I0128 13:01:14.153667 9 log.go:172] (0xc0001b6aa0) (1) Data frame sent
I0128 13:01:14.153689 9 log.go:172] (0xc000a11080) (0xc0001b6aa0) Stream removed, broadcasting: 1
I0128 13:01:14.154043 9 log.go:172] (0xc000a11080) (0xc0025d4140) Stream removed, broadcasting: 5
I0128 13:01:14.154248 9 log.go:172] (0xc000a11080) Go away received
I0128 13:01:14.155555 9 log.go:172] (0xc000a11080) (0xc0001b6aa0) Stream removed, broadcasting: 1
I0128 13:01:14.155696 9 log.go:172] (0xc000a11080) (0xc001d1a000) Stream removed, broadcasting: 3
I0128 13:01:14.155727 9 log.go:172] (0xc000a11080) (0xc0025d4140) Stream removed, broadcasting: 5
Jan 28 13:01:14.155: INFO: Waiting for endpoints: map[]
Jan 28 13:01:14.178: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1945 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 13:01:14.178: INFO: >>> kubeConfig: /root/.kube/config
I0128 13:01:14.265373 9 log.go:172] (0xc000a11810) (0xc0001b6d20) Create stream
I0128 13:01:14.265690 9 log.go:172] (0xc000a11810) (0xc0001b6d20) Stream added, broadcasting: 1
I0128 13:01:14.280413 9 log.go:172] (0xc000a11810) Reply frame received for 1
I0128 13:01:14.280499 9 log.go:172] (0xc000a11810) (0xc000a186e0) Create stream
I0128 13:01:14.280523 9 log.go:172] (0xc000a11810) (0xc000a186e0) Stream added, broadcasting: 3
I0128 13:01:14.282344 9 log.go:172] (0xc000a11810) Reply frame received for 3
I0128 13:01:14.282407 9 log.go:172] (0xc000a11810) (0xc001d1a0a0) Create stream
I0128 13:01:14.282440 9 log.go:172] (0xc000a11810) (0xc001d1a0a0) Stream added, broadcasting: 5
I0128 13:01:14.284155 9 log.go:172] (0xc000a11810) Reply frame received for 5
I0128 13:01:14.407566 9 log.go:172] (0xc000a11810) Data frame received for 3
I0128 13:01:14.407625 9 log.go:172] (0xc000a186e0) (3) Data frame handling
I0128 13:01:14.407652 9 log.go:172] (0xc000a186e0) (3) Data frame sent
I0128 13:01:14.603293 9 log.go:172] (0xc000a11810) Data frame received for 1
I0128 13:01:14.603413 9 log.go:172] (0xc000a11810) (0xc001d1a0a0) Stream removed, broadcasting: 5
I0128 13:01:14.603577 9 log.go:172] (0xc0001b6d20) (1) Data frame handling
I0128 13:01:14.603621 9 log.go:172] (0xc0001b6d20) (1) Data frame sent
I0128 13:01:14.603908 9 log.go:172] (0xc000a11810) (0xc000a186e0) Stream removed, broadcasting: 3
I0128 13:01:14.603982 9 log.go:172] (0xc000a11810) (0xc0001b6d20) Stream removed, broadcasting: 1
I0128 13:01:14.603999 9 log.go:172] (0xc000a11810) Go away received
I0128 13:01:14.604617 9 log.go:172] (0xc000a11810) (0xc0001b6d20) Stream removed, broadcasting: 1
I0128 13:01:14.604650 9 log.go:172] (0xc000a11810) (0xc000a186e0) Stream removed, broadcasting: 3
I0128 13:01:14.604671 9 log.go:172] (0xc000a11810) (0xc001d1a0a0) Stream removed, broadcasting: 5
Jan 28 13:01:14.604: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:01:14.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1945" for this suite.
Jan 28 13:01:40.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:01:40.765: INFO: namespace pod-network-test-1945 deletion completed in 26.149307777s
• [SLOW TEST:59.645 seconds]
[sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:01:40.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 13:04:44.038: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:44.143: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:46.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:46.153: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:48.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:48.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:50.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:50.152: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:52.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:52.156: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:54.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:54.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:56.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:56.152: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:04:58.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:04:58.155: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:00.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:00.151: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:02.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:02.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:04.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:04.159: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:06.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:06.153: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:08.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:08.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:10.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:10.151: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:12.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:12.151: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:14.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:14.184: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:16.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:16.154: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:18.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:18.154: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:20.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:20.152: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:22.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:22.161: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:24.145: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:24.150: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:26.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:26.158: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 13:05:28.144: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 13:05:28.537: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:05:28.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8202" for this suite.
Jan 28 13:05:50.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:05:50.895: INFO: namespace container-lifecycle-hook-8202 deletion completed in 22.344611621s
• [SLOW TEST:250.129 seconds]
[k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:05:50.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-c58l
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 13:05:51.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c58l" in namespace "subpath-1303" to be "success or failure"
Jan 28 13:05:51.147: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Pending", Reason="", readiness=false. Elapsed: 29.246004ms
Jan 28 13:05:53.155: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037164168s
Jan 28 13:05:55.164: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045585443s
Jan 28 13:05:57.175: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056312812s
Jan 28 13:05:59.186: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 8.068131477s
Jan 28 13:06:01.197: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 10.079080662s
Jan 28 13:06:03.203: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 12.085192861s
Jan 28 13:06:05.226: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 14.108113181s
Jan 28 13:06:07.234: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 16.115382856s
Jan 28 13:06:09.246: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 18.127392986s
Jan 28 13:06:11.260: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 20.141413999s
Jan 28 13:06:13.281: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 22.162607126s
Jan 28 13:06:15.290: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 24.171335341s
Jan 28 13:06:17.300: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Running", Reason="", readiness=true. Elapsed: 26.181600986s
Jan 28 13:06:19.315: INFO: Pod "pod-subpath-test-downwardapi-c58l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.196404766s
STEP: Saw pod success
Jan 28 13:06:19.315: INFO: Pod "pod-subpath-test-downwardapi-c58l" satisfied condition "success or failure"
Jan 28 13:06:19.326: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-c58l container test-container-subpath-downwardapi-c58l:
STEP: delete the pod
Jan 28 13:06:19.469: INFO: Waiting for pod pod-subpath-test-downwardapi-c58l to disappear
Jan 28 13:06:19.476: INFO: Pod pod-subpath-test-downwardapi-c58l no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-c58l
Jan 28 13:06:19.476: INFO: Deleting pod "pod-subpath-test-downwardapi-c58l" in namespace "subpath-1303"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:06:19.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1303" for this suite.
Jan 28 13:06:25.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:06:25.754: INFO: namespace subpath-1303 deletion completed in 6.26583533s
• [SLOW TEST:34.858 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:06:25.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 28 13:06:25.967: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6924,SelfLink:/api/v1/namespaces/watch-6924/configmaps/e2e-watch-test-watch-closed,UID:e76393ca-4e39-45a8-9584-fa108cf06e8c,ResourceVersion:22184999,Generation:0,CreationTimestamp:2020-01-28 13:06:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 13:06:25.968: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6924,SelfLink:/api/v1/namespaces/watch-6924/configmaps/e2e-watch-test-watch-closed,UID:e76393ca-4e39-45a8-9584-fa108cf06e8c,ResourceVersion:22185000,Generation:0,CreationTimestamp:2020-01-28 13:06:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 28 13:06:26.021: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6924,SelfLink:/api/v1/namespaces/watch-6924/configmaps/e2e-watch-test-watch-closed,UID:e76393ca-4e39-45a8-9584-fa108cf06e8c,ResourceVersion:22185001,Generation:0,CreationTimestamp:2020-01-28 13:06:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 13:06:26.021: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6924,SelfLink:/api/v1/namespaces/watch-6924/configmaps/e2e-watch-test-watch-closed,UID:e76393ca-4e39-45a8-9584-fa108cf06e8c,ResourceVersion:22185002,Generation:0,CreationTimestamp:2020-01-28 13:06:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:06:26.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6924" for this suite.
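The Watchers spec above closes a watch after two notifications, mutates the ConfigMap while no watch is open, then restarts a watch from the last observed resourceVersion and expects to see exactly the missed MODIFIED and DELETED events. That resume semantic can be sketched in miniature with plain Python (no cluster; the resourceVersion numbers are copied from the log, and `watch_from` is an illustrative stand-in for the API server's watch, not a real client call):

```python
# Notifications as (resourceVersion, event_type) pairs, taken from the log above.
events = [(22184999, "ADDED"), (22185000, "MODIFIED"),
          (22185001, "MODIFIED"), (22185002, "DELETED")]

def watch_from(events, resource_version=0):
    """Yield only events newer than resource_version, the way a Kubernetes
    watch started with resourceVersion=N replays only subsequent changes."""
    for rv, kind in events:
        if rv > resource_version:
            yield rv, kind

# First watch observes the first two notifications, then closes.
first = list(watch_from(events))[:2]
last_rv = first[-1][0]          # 22185000, the last resourceVersion seen

# Restarting from last_rv delivers exactly the changes made while closed:
# the second MODIFIED (mutation: 2) and the DELETED event.
resumed = list(watch_from(events, resource_version=last_rv))
```

The point the test verifies is that no intervening event is skipped or duplicated when the client supplies its last observed resourceVersion on reconnect.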
Jan 28 13:06:32.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:06:32.327: INFO: namespace watch-6924 deletion completed in 6.234675605s
• [SLOW TEST:6.572 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:06:32.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:06:32.467: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 28 13:06:32.478: INFO: Number of nodes with available pods: 0
Jan 28 13:06:32.478: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 28 13:06:32.586: INFO: Number of nodes with available pods: 0 Jan 28 13:06:32.586: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:33.633: INFO: Number of nodes with available pods: 0 Jan 28 13:06:33.633: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:34.600: INFO: Number of nodes with available pods: 0 Jan 28 13:06:34.600: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:35.744: INFO: Number of nodes with available pods: 0 Jan 28 13:06:35.744: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:36.594: INFO: Number of nodes with available pods: 0 Jan 28 13:06:36.594: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:37.603: INFO: Number of nodes with available pods: 0 Jan 28 13:06:37.603: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:38.608: INFO: Number of nodes with available pods: 0 Jan 28 13:06:38.608: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:39.673: INFO: Number of nodes with available pods: 0 Jan 28 13:06:39.673: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:40.608: INFO: Number of nodes with available pods: 1 Jan 28 13:06:40.608: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 28 13:06:40.692: INFO: Number of nodes with available pods: 1 Jan 28 13:06:40.692: INFO: Number of running nodes: 0, number of available pods: 1 Jan 28 13:06:41.699: INFO: Number of nodes with available pods: 0 Jan 28 13:06:41.699: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 28 13:06:41.720: INFO: Number of nodes with available pods: 0 Jan 28 13:06:41.720: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:42.729: INFO: Number of nodes with available pods: 0 Jan 28 
13:06:42.729: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:43.731: INFO: Number of nodes with available pods: 0 Jan 28 13:06:43.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:44.734: INFO: Number of nodes with available pods: 0 Jan 28 13:06:44.734: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:45.728: INFO: Number of nodes with available pods: 0 Jan 28 13:06:45.728: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:46.728: INFO: Number of nodes with available pods: 0 Jan 28 13:06:46.728: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:47.730: INFO: Number of nodes with available pods: 0 Jan 28 13:06:47.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:48.739: INFO: Number of nodes with available pods: 0 Jan 28 13:06:48.739: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:49.728: INFO: Number of nodes with available pods: 0 Jan 28 13:06:49.728: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:50.737: INFO: Number of nodes with available pods: 0 Jan 28 13:06:50.737: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:51.731: INFO: Number of nodes with available pods: 0 Jan 28 13:06:51.732: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:52.731: INFO: Number of nodes with available pods: 0 Jan 28 13:06:52.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:53.730: INFO: Number of nodes with available pods: 0 Jan 28 13:06:53.730: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:54.740: INFO: Number of nodes with available pods: 0 Jan 28 13:06:54.740: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:55.744: INFO: Number of nodes with available pods: 0 Jan 28 13:06:55.744: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:56.731: INFO: Number of nodes 
with available pods: 0 Jan 28 13:06:56.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:57.737: INFO: Number of nodes with available pods: 0 Jan 28 13:06:57.737: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:58.729: INFO: Number of nodes with available pods: 0 Jan 28 13:06:58.730: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:06:59.733: INFO: Number of nodes with available pods: 0 Jan 28 13:06:59.733: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:07:00.748: INFO: Number of nodes with available pods: 0 Jan 28 13:07:00.748: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:07:01.736: INFO: Number of nodes with available pods: 0 Jan 28 13:07:01.736: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:07:02.731: INFO: Number of nodes with available pods: 0 Jan 28 13:07:02.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:07:03.731: INFO: Number of nodes with available pods: 0 Jan 28 13:07:03.731: INFO: Node iruya-node is running more than one daemon pod Jan 28 13:07:04.733: INFO: Number of nodes with available pods: 1 Jan 28 13:07:04.733: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6975, will wait for the garbage collector to delete the pods Jan 28 13:07:04.822: INFO: Deleting DaemonSet.extensions daemon-set took: 25.319992ms Jan 28 13:07:05.123: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.715195ms Jan 28 13:07:16.734: INFO: Number of nodes with available pods: 0 Jan 28 13:07:16.734: INFO: Number of running nodes: 0, number of available pods: 0 Jan 28 13:07:16.744: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6975/daemonsets","resourceVersion":"22185129"},"items":null} Jan 28 13:07:16.754: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6975/pods","resourceVersion":"22185129"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:07:16.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6975" for this suite. Jan 28 13:07:22.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:07:23.097: INFO: namespace daemonsets-6975 deletion completed in 6.20292641s • [SLOW TEST:50.769 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:07:23.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-3317 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3317 to expose endpoints map[] Jan 28 13:07:23.258: INFO: Get endpoints failed (12.606822ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 28 13:07:24.264: INFO: successfully validated that service multi-endpoint-test in namespace services-3317 exposes endpoints map[] (1.019165598s elapsed) STEP: Creating pod pod1 in namespace services-3317 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3317 to expose endpoints map[pod1:[100]] Jan 28 13:07:28.373: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.098247671s elapsed, will retry) Jan 28 13:07:34.051: INFO: successfully validated that service multi-endpoint-test in namespace services-3317 exposes endpoints map[pod1:[100]] (9.775755086s elapsed) STEP: Creating pod pod2 in namespace services-3317 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3317 to expose endpoints map[pod1:[100] pod2:[101]] Jan 28 13:07:38.401: INFO: Unexpected endpoints: found map[69519df4-e4a6-4623-a774-ec0158e51b77:[100]], expected map[pod1:[100] pod2:[101]] (4.342406654s elapsed, will retry) Jan 28 13:07:44.551: INFO: successfully validated that service multi-endpoint-test in namespace services-3317 exposes endpoints map[pod1:[100] pod2:[101]] (10.492946031s elapsed) STEP: Deleting pod pod1 in namespace services-3317 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3317 to expose endpoints map[pod2:[101]] Jan 28 13:07:45.807: INFO: successfully validated that service multi-endpoint-test in namespace services-3317 exposes endpoints map[pod2:[101]] (1.242414455s elapsed) STEP: Deleting pod pod2 in namespace services-3317 STEP: waiting up to 3m0s for service 
multi-endpoint-test in namespace services-3317 to expose endpoints map[] Jan 28 13:07:45.870: INFO: successfully validated that service multi-endpoint-test in namespace services-3317 exposes endpoints map[] (29.104354ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:07:45.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3317" for this suite. Jan 28 13:08:10.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:08:10.312: INFO: namespace services-3317 deletion completed in 24.305485351s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:47.215 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:08:10.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the 
matched label of one of its pods change Jan 28 13:08:10.574: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 28 13:08:15.586: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:08:16.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9527" for this suite. Jan 28 13:08:22.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:08:22.939: INFO: namespace replication-controller-9527 deletion completed in 6.268407897s • [SLOW TEST:12.626 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:08:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 28 13:08:23.154: INFO: Waiting up to 5m0s for pod 
"pod-cd5d4535-90b1-43b4-8f97-583800ac95c2" in namespace "emptydir-6954" to be "success or failure" Jan 28 13:08:23.172: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 18.514963ms Jan 28 13:08:25.193: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039580247s Jan 28 13:08:27.222: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067714599s Jan 28 13:08:29.230: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076348552s Jan 28 13:08:31.246: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092542435s Jan 28 13:08:33.413: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.259417479s Jan 28 13:08:35.424: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.269977736s Jan 28 13:08:37.432: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.27860043s Jan 28 13:08:39.441: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.286896243s STEP: Saw pod success Jan 28 13:08:39.441: INFO: Pod "pod-cd5d4535-90b1-43b4-8f97-583800ac95c2" satisfied condition "success or failure" Jan 28 13:08:39.445: INFO: Trying to get logs from node iruya-node pod pod-cd5d4535-90b1-43b4-8f97-583800ac95c2 container test-container: STEP: delete the pod Jan 28 13:08:39.618: INFO: Waiting for pod pod-cd5d4535-90b1-43b4-8f97-583800ac95c2 to disappear Jan 28 13:08:39.629: INFO: Pod pod-cd5d4535-90b1-43b4-8f97-583800ac95c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:08:39.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6954" for this suite. Jan 28 13:08:45.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:08:45.803: INFO: namespace emptydir-6954 deletion completed in 6.166692207s • [SLOW TEST:22.863 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:08:45.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be 
consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-eac8d0ca-3890-43d6-ba45-7bb4f5db425a STEP: Creating a pod to test consume configMaps Jan 28 13:08:46.011: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd" in namespace "projected-1335" to be "success or failure" Jan 28 13:08:46.018: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.328121ms Jan 28 13:08:48.035: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024396444s Jan 28 13:08:50.059: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047929019s Jan 28 13:08:52.081: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069805043s Jan 28 13:08:54.088: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077271828s Jan 28 13:08:56.099: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.087993622s STEP: Saw pod success Jan 28 13:08:56.099: INFO: Pod "pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd" satisfied condition "success or failure" Jan 28 13:08:56.106: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd container projected-configmap-volume-test: STEP: delete the pod Jan 28 13:08:56.180: INFO: Waiting for pod pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd to disappear Jan 28 13:08:56.333: INFO: Pod pod-projected-configmaps-886d9510-101c-493d-8958-e76d7bd7cbbd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:08:56.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1335" for this suite. Jan 28 13:09:02.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:09:02.480: INFO: namespace projected-1335 deletion completed in 6.137537953s • [SLOW TEST:16.677 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:09:02.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Jan 28 13:09:02.644: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4231" to be "success or failure" Jan 28 13:09:02.651: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063774ms Jan 28 13:09:04.670: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025810884s Jan 28 13:09:06.695: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050546029s Jan 28 13:09:08.706: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061766815s Jan 28 13:09:10.736: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09132392s Jan 28 13:09:12.741: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.09691092s Jan 28 13:09:14.975: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.330887644s Jan 28 13:09:16.992: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.346910149s Jan 28 13:09:19.004: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.359024446s STEP: Saw pod success Jan 28 13:09:19.004: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jan 28 13:09:19.015: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Jan 28 13:09:19.292: INFO: Waiting for pod pod-host-path-test to disappear Jan 28 13:09:19.298: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:09:19.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4231" for this suite. Jan 28 13:09:25.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:09:25.675: INFO: namespace hostpath-4231 deletion completed in 6.255943957s • [SLOW TEST:23.193 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:09:25.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-8fc03c89-093d-44a2-8a90-7ed4dc5fcb35 in namespace container-probe-2015 Jan 28 13:09:35.871: INFO: Started pod busybox-8fc03c89-093d-44a2-8a90-7ed4dc5fcb35 in namespace container-probe-2015 STEP: checking the pod's current state and verifying that restartCount is present Jan 28 13:09:35.876: INFO: Initial restart count of pod busybox-8fc03c89-093d-44a2-8a90-7ed4dc5fcb35 is 0 Jan 28 13:10:28.440: INFO: Restart count of pod container-probe-2015/busybox-8fc03c89-093d-44a2-8a90-7ed4dc5fcb35 is now 1 (52.56428841s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:10:28.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2015" for this suite. 
Jan 28 13:10:34.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:10:34.663: INFO: namespace container-probe-2015 deletion completed in 6.157960046s • [SLOW TEST:68.988 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:10:34.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jan 28 13:10:34.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2833' Jan 28 13:10:37.061: INFO: stderr: "" Jan 28 13:10:37.062: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Jan 28 13:10:38.074: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:38.075: INFO: Found 0 / 1 Jan 28 13:10:39.077: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:39.077: INFO: Found 0 / 1 Jan 28 13:10:40.068: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:40.068: INFO: Found 0 / 1 Jan 28 13:10:41.070: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:41.070: INFO: Found 0 / 1 Jan 28 13:10:42.075: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:42.075: INFO: Found 0 / 1 Jan 28 13:10:43.073: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:43.073: INFO: Found 0 / 1 Jan 28 13:10:44.079: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:44.079: INFO: Found 0 / 1 Jan 28 13:10:45.091: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:45.091: INFO: Found 0 / 1 Jan 28 13:10:46.074: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:46.074: INFO: Found 0 / 1 Jan 28 13:10:47.079: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:47.079: INFO: Found 1 / 1 Jan 28 13:10:47.079: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 28 13:10:47.086: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:10:47.086: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 28 13:10:47.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833' Jan 28 13:10:47.285: INFO: stderr: "" Jan 28 13:10:47.285: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Jan 13:10:45.458 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 13:10:45.458 # Server started, Redis version 3.2.12\n1:M 28 Jan 13:10:45.460 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Jan 13:10:45.460 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 28 13:10:47.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833 --tail=1'
Jan 28 13:10:47.419: INFO: stderr: ""
Jan 28 13:10:47.419: INFO: stdout: "1:M 28 Jan 13:10:45.460 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 28 13:10:47.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833 --limit-bytes=1'
Jan 28 13:10:47.547: INFO: stderr: ""
Jan 28 13:10:47.548: INFO: stdout: " "
STEP: exposing timestamps
Jan 28 13:10:47.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833 --tail=1 --timestamps'
Jan 28 13:10:47.674: INFO: stderr: ""
Jan 28 13:10:47.674: INFO: stdout: "2020-01-28T13:10:45.463629406Z 1:M 28 Jan 13:10:45.460 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 28 13:10:50.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833 --since=1s'
Jan 28 13:10:50.382: INFO: stderr: ""
Jan 28 13:10:50.382: INFO: stdout: ""
Jan 28 13:10:50.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p8bg7 redis-master --namespace=kubectl-2833 --since=24h'
Jan 28 13:10:50.626: INFO: stderr: ""
Jan 28 13:10:50.626: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 28 Jan 13:10:45.458 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 13:10:45.458 # Server started, Redis version 3.2.12\n1:M 28 Jan 13:10:45.460 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Jan 13:10:45.460 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan 28 13:10:50.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2833'
Jan 28 13:10:50.747: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:10:50.747: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 28 13:10:50.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2833'
Jan 28 13:10:50.877: INFO: stderr: "No resources found.\n"
Jan 28 13:10:50.877: INFO: stdout: ""
Jan 28 13:10:50.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2833 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 13:10:51.072: INFO: stderr: ""
Jan 28 13:10:51.072: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:10:51.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2833" for this suite.
Jan 28 13:11:13.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:11:13.263: INFO: namespace kubectl-2833 deletion completed in 22.174868993s

• [SLOW TEST:38.598 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:11:13.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 13:11:33.745: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 13:11:33.776: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 13:11:35.776: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 13:11:35.791: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 13:11:37.776: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 13:11:37.791: INFO: Pod pod-with-poststart-http-hook still exists
Jan 28 13:11:39.776: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 28 13:11:39.792: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:11:39.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3149" for this suite.
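Annotation: the test above creates a pod whose container declares a `postStart` HTTP hook, waits for the hook endpoint to be hit, then deletes the pod. A minimal, hypothetical sketch of such a manifest (the pod name is taken from the log; the image, path, port, and host are illustrative assumptions, not values from the suite):

```yaml
# Hypothetical sketch of a pod with a postStart HTTP lifecycle hook,
# similar in shape to what this e2e test creates.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name taken from the log above
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumption: any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo                  # hypothetical path served by the handler pod
          port: 8080                   # hypothetical handler port
          host: 10.44.0.1              # hypothetical IP of the handler pod
```

The hook fires immediately after the container starts; the test passes once the handler pod records the GET request.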
Jan 28 13:12:03.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:12:03.951: INFO: namespace container-lifecycle-hook-3149 deletion completed in 24.146599067s

• [SLOW TEST:50.687 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:12:03.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:12:04.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8232'
Jan 28 13:12:04.521: INFO: stderr: ""
Jan 28 13:12:04.522: INFO: stdout: "replicationcontroller/redis-master created\n"
Jan 28 13:12:04.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8232'
Jan 28 13:12:05.110: INFO: stderr: ""
Jan 28 13:12:05.110: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 28 13:12:06.121: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:06.121: INFO: Found 0 / 1
Jan 28 13:12:07.120: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:07.120: INFO: Found 0 / 1
Jan 28 13:12:08.126: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:08.126: INFO: Found 0 / 1
Jan 28 13:12:09.126: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:09.126: INFO: Found 0 / 1
Jan 28 13:12:10.124: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:10.125: INFO: Found 0 / 1
Jan 28 13:12:11.121: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:11.121: INFO: Found 0 / 1
Jan 28 13:12:12.125: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:12.125: INFO: Found 0 / 1
Jan 28 13:12:13.131: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:13.132: INFO: Found 1 / 1
Jan 28 13:12:13.132: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 13:12:13.143: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 13:12:13.143: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 13:12:13.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-8cxbr --namespace=kubectl-8232'
Jan 28 13:12:13.277: INFO: stderr: ""
Jan 28 13:12:13.277: INFO: stdout: "Name: redis-master-8cxbr\nNamespace: kubectl-8232\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Tue, 28 Jan 2020 13:12:04 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://a5744fc0989d86f411d5559c0452715dc824afdd0e0ac9fb8be9b18c25fd91a5\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 28 Jan 2020 13:12:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-5rj7r (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-5rj7r:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-5rj7r\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 9s default-scheduler Successfully assigned kubectl-8232/redis-master-8cxbr to iruya-node\n Normal Pulled 5s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n"
Jan 28 13:12:13.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8232'
Jan 28 13:12:13.438: INFO: stderr: ""
Jan 28 13:12:13.438: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8232\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 9s replication-controller Created pod: redis-master-8cxbr\n"
Jan 28 13:12:13.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8232'
Jan 28 13:12:13.648: INFO: stderr: ""
Jan 28 13:12:13.648: INFO: stdout: "Name: redis-master\nNamespace: kubectl-8232\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.39.200\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n"
Jan 28 13:12:13.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Jan 28 13:12:13.849: INFO: stderr: ""
Jan 28 13:12:13.849: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Tue, 28 Jan 2020 13:11:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 28 Jan 2020 13:11:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 28 Jan 2020 13:11:46 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 28 Jan 2020 13:11:46 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 177d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 108d\n kubectl-8232 redis-master-8cxbr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
Jan 28 13:12:13.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8232'
Jan 28 13:12:14.061: INFO: stderr: ""
Jan 28 13:12:14.061: INFO: stdout: "Name: kubectl-8232\nLabels: e2e-framework=kubectl\n e2e-run=18d4cdca-1599-451e-af4a-7fb92d7e49cc\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:12:14.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8232" for this suite.
Jan 28 13:12:36.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:12:36.246: INFO: namespace kubectl-8232 deletion completed in 22.178713029s

• [SLOW TEST:32.294 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:12:36.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 13:12:47.157: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f13be5ef-97bd-4a72-86c0-d4610da7d59b"
Jan 28 13:12:47.157: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f13be5ef-97bd-4a72-86c0-d4610da7d59b" in namespace "pods-4670" to be "terminated due to deadline exceeded"
Jan 28 13:12:47.203: INFO: Pod "pod-update-activedeadlineseconds-f13be5ef-97bd-4a72-86c0-d4610da7d59b": Phase="Running", Reason="", readiness=true. Elapsed: 45.861649ms
Jan 28 13:12:49.215: INFO: Pod "pod-update-activedeadlineseconds-f13be5ef-97bd-4a72-86c0-d4610da7d59b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.057547205s
Jan 28 13:12:49.215: INFO: Pod "pod-update-activedeadlineseconds-f13be5ef-97bd-4a72-86c0-d4610da7d59b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:12:49.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4670" for this suite.
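Annotation: the activeDeadlineSeconds test above creates a running pod, then updates `spec.activeDeadlineSeconds` to a smaller value so the kubelet terminates the pod with `Phase=Failed, Reason=DeadlineExceeded`. A hypothetical sketch of the manifest shape involved (the pod name, image, and deadline values are illustrative assumptions, not taken from the suite):

```yaml
# Hypothetical sketch: a pod whose activeDeadlineSeconds is later
# lowered so the kubelet kills it with reason DeadlineExceeded.
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example  # hypothetical name
spec:
  activeDeadlineSeconds: 30          # initial deadline (illustrative value)
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1      # assumption: any long-running image works
```

The update itself can be sketched as a patch such as `kubectl patch pod pod-update-activedeadlineseconds-example -p '{"spec":{"activeDeadlineSeconds":5}}'` (values illustrative); the API allows the deadline to be shortened on a live pod, which is what drives the Running-to-Failed transition seen in the log.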
Jan 28 13:12:55.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:12:55.420: INFO: namespace pods-4670 deletion completed in 6.198314091s

• [SLOW TEST:19.174 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:12:55.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 28 13:12:55.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 13:12:55.597: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 13:12:55.604: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Jan 28 13:12:55.616: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 28 13:12:55.616: INFO: Container weave ready: true, restart count 0
Jan 28 13:12:55.616: INFO: Container weave-npc ready: true, restart count 0
Jan 28 13:12:55.616: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.616: INFO: Container kube-proxy ready: true, restart count 0
Jan 28 13:12:55.616: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 28 13:12:55.629: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container coredns ready: true, restart count 0
Jan 28 13:12:55.629: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container kube-scheduler ready: true, restart count 13
Jan 28 13:12:55.629: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container weave ready: true, restart count 0
Jan 28 13:12:55.629: INFO: Container weave-npc ready: true, restart count 0
Jan 28 13:12:55.629: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container coredns ready: true, restart count 0
Jan 28 13:12:55.629: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container etcd ready: true, restart count 0
Jan 28 13:12:55.629: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container kube-proxy ready: true, restart count 0
Jan 28 13:12:55.629: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container kube-controller-manager ready: true, restart count 19
Jan 28 13:12:55.629: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 28 13:12:55.629: INFO: Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ee0f261ec191c2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:12:56.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4943" for this suite.
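Annotation: the scheduling test above submits a pod with a non-empty `nodeSelector` that no node satisfies, then asserts that a FailedScheduling event is emitted and the pod stays Pending. A hypothetical sketch of a pod that would reproduce the event in the log (the pod name matches the event; the label key/value and image are illustrative assumptions):

```yaml
# Hypothetical sketch: a pod whose nodeSelector matches no node,
# so the scheduler emits "0/2 nodes are available: 2 node(s)
# didn't match node selector." and the pod stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod             # name taken from the FailedScheduling event above
spec:
  nodeSelector:
    example-label: no-such-value   # hypothetical label present on no node
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1    # assumption: image choice is irrelevant here
```

The pod is never bound; the test only inspects the scheduler's event stream before tearing the namespace down.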
Jan 28 13:13:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:13:02.871: INFO: namespace sched-pred-4943 deletion completed in 6.205867198s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.451 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:13:02.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:13:02.955: INFO: Creating deployment "nginx-deployment"
Jan 28 13:13:03.057: INFO: Waiting for observed generation 1
Jan 28 13:13:06.253: INFO: Waiting for all required pods to come up
Jan 28 13:13:07.080: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 28 13:13:33.367: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 28 13:13:33.378: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 28 13:13:33.389: INFO: Updating deployment nginx-deployment
Jan 28 13:13:33.389: INFO: Waiting for observed generation 2
Jan 28 13:13:36.452: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 28 13:13:36.525: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 28 13:13:36.708: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:13:38.042: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 28 13:13:38.042: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 28 13:13:38.047: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:13:38.497: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 28 13:13:38.498: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 28 13:13:38.599: INFO: Updating deployment nginx-deployment
Jan 28 13:13:38.599: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 28 13:13:38.729: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 28 13:13:38.775: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 28 13:13:39.356: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3801,SelfLink:/apis/apps/v1/namespaces/deployment-3801/deployments/nginx-deployment,UID:7bec48db-f440-45a6-96a5-ecce4990ca81,ResourceVersion:22186190,Generation:3,CreationTimestamp:2020-01-28 13:13:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-28 13:13:35 +0000 UTC 2020-01-28 13:13:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-28 13:13:38 +0000 UTC 2020-01-28 13:13:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Jan 28 13:13:40.608: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3801,SelfLink:/apis/apps/v1/namespaces/deployment-3801/replicasets/nginx-deployment-55fb7cb77f,UID:4d295976-dfa4-4c22-992c-1d0dd41a500f,ResourceVersion:22186226,Generation:3,CreationTimestamp:2020-01-28 13:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7bec48db-f440-45a6-96a5-ecce4990ca81 0xc002783f77 0xc002783f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 13:13:40.608: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 28 13:13:40.608: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3801,SelfLink:/apis/apps/v1/namespaces/deployment-3801/replicasets/nginx-deployment-7b8c6f4498,UID:808ef477-29ae-4c32-94ab-aa49c0745a7e,ResourceVersion:22186235,Generation:3,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7bec48db-f440-45a6-96a5-ecce4990ca81 0xc002f88047 0xc002f88048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 28 13:13:42.529: INFO: Pod "nginx-deployment-55fb7cb77f-79s8c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-79s8c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-79s8c,UID:0706635f-1c09-4f10-b02d-593febb465c6,ResourceVersion:22186230,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f889c7 0xc002f889c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f88a30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f88a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.530: INFO: Pod "nginx-deployment-55fb7cb77f-bkjrw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bkjrw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-bkjrw,UID:50fde185-f32d-4b4b-9c18-e36d3c67c133,ResourceVersion:22186146,Generation:0,CreationTimestamp:2020-01-28 13:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f88cd7 
0xc002f88cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f88d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f88d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-28 13:13:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.530: INFO: Pod "nginx-deployment-55fb7cb77f-c8xlj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c8xlj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-c8xlj,UID:0c26171e-feaa-4eeb-9925-cfc2d8261a8b,ResourceVersion:22186174,Generation:0,CreationTimestamp:2020-01-28 13:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f88e47 0xc002f88e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f88ec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f88ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-28 13:13:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.531: INFO: Pod "nginx-deployment-55fb7cb77f-h8jf6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h8jf6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-h8jf6,UID:7d8c93be-de5e-46f7-af0c-b0304caafb1d,ResourceVersion:22186236,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f88fb7 0xc002f88fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:38 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-28 13:13:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.532: INFO: Pod "nginx-deployment-55fb7cb77f-hk8p7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hk8p7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-hk8p7,UID:b0bf30b1-1d06-41c1-b90e-99696ba5d02e,ResourceVersion:22186171,Generation:0,CreationTimestamp:2020-01-28 13:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f89127 0xc002f89128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f891b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-28 13:13:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.532: INFO: Pod "nginx-deployment-55fb7cb77f-ltzx8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ltzx8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-ltzx8,UID:130eef8d-c812-48fd-839d-e0beaf6698b6,ResourceVersion:22186206,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f89287 0xc002f89288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f892f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.533: INFO: Pod "nginx-deployment-55fb7cb77f-pfcdh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pfcdh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-pfcdh,UID:43cc578c-41ac-48c3-a126-54d96f2865e7,ResourceVersion:22186168,Generation:0,CreationTimestamp:2020-01-28 13:13:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f893a7 
0xc002f893a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89430} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-28 13:13:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.533: INFO: Pod "nginx-deployment-55fb7cb77f-phjz7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phjz7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-phjz7,UID:77073563-9a41-4317-833b-49c01a05b94c,ResourceVersion:22186176,Generation:0,CreationTimestamp:2020-01-28 13:13:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f89527 0xc002f89528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f895a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f895c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:34 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-28 13:13:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.534: INFO: Pod "nginx-deployment-55fb7cb77f-qgbnv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qgbnv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-qgbnv,UID:6bd2ff4e-9017-4bc0-b126-4c859de43da7,ResourceVersion:22186214,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f89697 0xc002f89698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002f89710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.534: INFO: Pod "nginx-deployment-55fb7cb77f-shn5s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-shn5s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-shn5s,UID:17cb11be-a3fe-451f-90d4-84a9bbc483b8,ResourceVersion:22186212,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f897c7 0xc002f897c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89830} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.534: INFO: Pod "nginx-deployment-55fb7cb77f-vzs92" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vzs92,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-vzs92,UID:227888d2-91f8-404c-824d-3e794f9e73af,ResourceVersion:22186209,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f898d7 
0xc002f898d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89950} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.535: INFO: Pod "nginx-deployment-55fb7cb77f-wr757" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wr757,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-wr757,UID:bf2308ab-8333-41a1-b1f0-9c4508c47caf,ResourceVersion:22186204,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f899f7 0xc002f899f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002f89a70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.535: INFO: Pod "nginx-deployment-55fb7cb77f-zzhg5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zzhg5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-55fb7cb77f-zzhg5,UID:575946b4-153d-4797-b6d2-638b8e8531df,ResourceVersion:22186217,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4d295976-dfa4-4c22-992c-1d0dd41a500f 0xc002f89b17 0xc002f89b18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.535: INFO: Pod "nginx-deployment-7b8c6f4498-2j82f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2j82f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-2j82f,UID:497ec84e-d641-4dcc-91a0-c229081e7f9e,ResourceVersion:22186232,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002f89c37 0xc002f89c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.536: INFO: Pod "nginx-deployment-7b8c6f4498-4wffl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4wffl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-4wffl,UID:e7f3c69e-d4b5-41bc-a1ff-ea0b0f142988,ResourceVersion:22186105,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002f89d47 0xc002f89d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89dc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://13d492b27bb98e3c84100af944c5cdf6f72876e5f07c823b0f010f6641977d57}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.536: INFO: Pod "nginx-deployment-7b8c6f4498-5svq2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5svq2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-5svq2,UID:032b837f-ef60-4854-a733-ad4568b471f1,ResourceVersion:22186231,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002f89eb7 0xc002f89eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f89f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f89f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.537: INFO: Pod "nginx-deployment-7b8c6f4498-6shrs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6shrs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-6shrs,UID:523e9944-8a80-46c1-b013-503bd7551ea4,ResourceVersion:22186191,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002f89fd7 0xc002f89fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82050} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:38 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.537: INFO: Pod "nginx-deployment-7b8c6f4498-98rlc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98rlc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-98rlc,UID:fae608a0-5d3c-4c75-b9fe-84f1c1c2f091,ResourceVersion:22186216,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a820f7 0xc002a820f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82170} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.537: INFO: Pod "nginx-deployment-7b8c6f4498-cr795" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cr795,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-cr795,UID:92bdad81-ada1-4b83-a16f-c01d9468ee28,ResourceVersion:22186227,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82217 0xc002a82218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82280} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a822a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.538: INFO: Pod "nginx-deployment-7b8c6f4498-gc2qv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gc2qv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-gc2qv,UID:bb11380e-d0fd-4717-9656-48608ed665e1,ResourceVersion:22186205,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82327 0xc002a82328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a823a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a823c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.538: INFO: Pod "nginx-deployment-7b8c6f4498-gh8xn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gh8xn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-gh8xn,UID:c39a9b1d-7310-4e06-ab39-c2905acfde92,ResourceVersion:22186207,Generation:0,CreationTimestamp:2020-01-28 13:13:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82447 0xc002a82448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a824b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a824d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.538: INFO: Pod "nginx-deployment-7b8c6f4498-jc9j6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jc9j6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-jc9j6,UID:5f102567-d4ff-4581-8570-1556e40097cc,ResourceVersion:22186115,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82557 0xc002a82558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a825d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a825f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c00f9c9bf6407264008be039a6c814638b5a9074b564df61c7c6789b24f26f69}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.539: INFO: Pod "nginx-deployment-7b8c6f4498-jdq9m" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jdq9m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-jdq9m,UID:0afeb94f-1bf7-40c4-957d-f21401ae993b,ResourceVersion:22186075,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a826c7 0xc002a826c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82730} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://81d860db4fda3fc0408e97f3656eb2c9c92e075dd70b2c1c00d302c5990120dd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.539: INFO: Pod "nginx-deployment-7b8c6f4498-qb945" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qb945,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-qb945,UID:a7a339b6-8029-4aac-bf7f-960a1f0c2c0d,ResourceVersion:22186069,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82827 0xc002a82828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a828b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://255e47efb25aab7794a689d5192c725a5a5cd47238b2ec661b1267c7308fbb79}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.539: INFO: Pod "nginx-deployment-7b8c6f4498-qnz7q" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qnz7q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-qnz7q,UID:be0a83ac-7ffb-409a-9670-3779d0e28cbb,ResourceVersion:22186066,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82987 0xc002a82988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a829f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0e505730db1b61001a71cdc7811d9094d242e6773590c6b707e3074e811aec1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.540: INFO: Pod "nginx-deployment-7b8c6f4498-rpd7d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rpd7d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-rpd7d,UID:308393b7-1b1f-470b-8363-ff3baaceca1e,ResourceVersion:22186210,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82ae7 0xc002a82ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82b60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.540: INFO: Pod "nginx-deployment-7b8c6f4498-tndj6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tndj6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-tndj6,UID:b5858fd7-94f0-4172-ad3a-e7e5a2b9fc57,ResourceVersion:22186229,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82c17 0xc002a82c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.541: INFO: Pod "nginx-deployment-7b8c6f4498-vkndd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vkndd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-vkndd,UID:c66b2f9d-bb48-4881-9c8e-8518835a648f,ResourceVersion:22186099,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82d27 0xc002a82d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82da0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://16d0e1592a47a31bdfde2112ab91e4441a90e30d55cea1d849ce74960ec89de6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.541: INFO: Pod "nginx-deployment-7b8c6f4498-x6n49" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x6n49,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-x6n49,UID:e0d6f961-63d5-4c13-be4b-2e7ac7bac1dd,ResourceVersion:22186112,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a82e97 0xc002a82e98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a82f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a82f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:26 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://77975e9f9af7c24e62011d88cbca24a4914fc10136800e217f5eea99ca9fbd59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.541: INFO: Pod "nginx-deployment-7b8c6f4498-xdsql" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xdsql,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-xdsql,UID:e0d8223b-c1d3-4e7d-8d77-5f38872fc43e,ResourceVersion:22186078,Generation:0,CreationTimestamp:2020-01-28 13:13:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a83007 0xc002a83008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a83070} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a83090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-28 13:13:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:13:24 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d0018a70d827f4688ee45da4c7790813a3d64d71e3ade330298a515a2c4e10eb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.542: INFO: Pod "nginx-deployment-7b8c6f4498-xwlqv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xwlqv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-xwlqv,UID:dcf52baf-c83e-44d3-8c7d-2ef27896675a,ResourceVersion:22186224,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a83167 0xc002a83168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a831e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a83200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.542: INFO: Pod "nginx-deployment-7b8c6f4498-z4hpf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z4hpf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-z4hpf,UID:597ffcfe-844b-47fc-96ce-ade4bbbece95,ResourceVersion:22186233,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a83287 0xc002a83288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a832f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a83310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 28 13:13:42.543: INFO: Pod "nginx-deployment-7b8c6f4498-z5t6x" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z5t6x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3801,SelfLink:/api/v1/namespaces/deployment-3801/pods/nginx-deployment-7b8c6f4498-z5t6x,UID:f39cb365-3e8d-4b35-a1c3-863e8b512ad2,ResourceVersion:22186213,Generation:0,CreationTimestamp:2020-01-28 13:13:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 808ef477-29ae-4c32-94ab-aa49c0745a7e 0xc002a83397 0xc002a83398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-95p8p {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-95p8p,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-95p8p true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a83400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a83420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:13:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:13:42.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3801" for this suite. 
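The deployment test above ("deployment should support proportional scaling") scales a Deployment while two ReplicaSets coexist mid-rollout, and expects the replica delta to be split across them in proportion to their current sizes. The arithmetic can be sketched roughly as follows; this is a simplified illustration, not the actual deployment-controller code — the function name and the policy of handing the rounding leftover to the largest ReplicaSet are assumptions for the sketch:

```python
def proportional_scale(rs_sizes, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes (assumes at least one existing replica)."""
    old_total = sum(rs_sizes)
    # floor each ReplicaSet's proportional share of the new total
    new_sizes = [size * new_total // old_total for size in rs_sizes]
    # hand any rounding leftover to the largest ReplicaSet (sketch policy)
    leftover = new_total - sum(new_sizes)
    largest = max(range(len(rs_sizes)), key=lambda i: rs_sizes[i])
    new_sizes[largest] += leftover
    return new_sizes
```

For example, scaling ReplicaSets of size 10 and 3 up to a total of 26 doubles both proportionally rather than dumping all new pods on one ReplicaSet.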
Jan 28 13:15:11.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:15:11.289: INFO: namespace deployment-3801 deletion completed in 1m27.183522076s
• [SLOW TEST:128.418 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:15:11.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:15:12.365: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 28 13:15:14.447: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:15:18.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8029" for this suite.
Jan 28 13:15:27.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:15:27.726: INFO: namespace replication-controller-8029 deletion completed in 9.391806835s
• [SLOW TEST:16.436 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:15:27.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:15:28.037: INFO: Create a RollingUpdate DaemonSet
Jan 28 13:15:28.042: INFO: Check that daemon pods launch on every node of the cluster
Jan 28 13:15:28.884: INFO: Number of nodes with available pods: 0
Jan 28 13:15:28.884: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:31.144: INFO: Number of nodes with available pods: 0
Jan 28 13:15:31.144: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:31.908: INFO: Number of nodes with available pods: 0
Jan 28 13:15:31.908: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:33.234: INFO: Number of nodes with available pods: 0
Jan 28 13:15:33.234: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:33.922: INFO: Number of nodes with available pods: 0
Jan 28 13:15:33.922: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:34.911: INFO: Number of nodes with available pods: 0
Jan 28 13:15:34.911: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:36.991: INFO: Number of nodes with available pods: 0
Jan 28 13:15:36.991: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:37.912: INFO: Number of nodes with available pods: 0
Jan 28 13:15:37.912: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:38.955: INFO: Number of nodes with available pods: 0
Jan 28 13:15:38.955: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:15:39.903: INFO: Number of nodes with available pods: 1
Jan 28 13:15:39.903: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:15:40.901: INFO: Number of nodes with available pods: 2
Jan 28 13:15:40.902: INFO: Number of running nodes: 2, number of available pods: 2
Jan 28 13:15:40.902: INFO: Update the DaemonSet to trigger a rollout
Jan 28 13:15:40.920: INFO: Updating DaemonSet daemon-set
Jan 28 13:15:58.159: INFO: Roll back the DaemonSet before rollout is complete
Jan 28 13:15:58.444: INFO: Updating DaemonSet daemon-set
Jan 28 13:15:58.445: INFO: Make sure DaemonSet rollback is complete
Jan 28 13:15:58.501: INFO: Wrong image for pod: daemon-set-lmklr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 28 13:15:58.501: INFO: Pod daemon-set-lmklr is not available
Jan 28 13:16:01.526: INFO: Pod daemon-set-9ptgf is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6645, will wait for the garbage collector to delete the pods
Jan 28 13:16:02.157: INFO: Deleting DaemonSet.extensions daemon-set took: 526.943492ms
Jan 28 13:16:02.657: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.835127ms
Jan 28 13:16:08.965: INFO: Number of nodes with available pods: 0
Jan 28 13:16:08.965: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 13:16:08.970: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6645/daemonsets","resourceVersion":"22186743"},"items":null}
Jan 28 13:16:08.973: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6645/pods","resourceVersion":"22186743"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:16:09.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6645" for this suite.
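For reference, the rollback sequence exercised in the test above can be reproduced by hand against a live cluster with standard kubectl commands. This is only a sketch: the e2e framework drives the rollout through the API rather than the CLI, and the container name `app` and the label selector `name=daemon-set` are assumptions, since the test's actual DaemonSet spec does not appear in this log.

```shell
# Trigger a rollout by switching to an image that can never pull,
# mirroring the test's intentional use of foo:non-existent:
kubectl -n daemonsets-6645 set image daemonset/daemon-set app=foo:non-existent

# Roll back to the previous revision before the rollout completes:
kubectl -n daemonsets-6645 rollout undo daemonset/daemon-set

# Confirm pods are back on the original image and were not restarted needlessly:
kubectl -n daemonsets-6645 get pods -l name=daemon-set \
  -o jsonpath='{range .items[*]}{.spec.containers[0].image}{"\t"}{.status.containerStatuses[0].restartCount}{"\n"}{end}'
```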
Jan 28 13:16:15.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:16:15.203: INFO: namespace daemonsets-6645 deletion completed in 6.177672121s
• [SLOW TEST:47.477 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:16:15.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 13:16:15.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4957'
Jan 28 13:16:15.470: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 13:16:15.471: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Jan 28 13:16:17.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4957'
Jan 28 13:16:17.666: INFO: stderr: ""
Jan 28 13:16:17.666: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:16:17.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4957" for this suite.
Jan 28 13:16:23.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:16:23.910: INFO: namespace kubectl-4957 deletion completed in 6.235461123s
• [SLOW TEST:8.705 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:16:23.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 13:16:24.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16" in namespace "downward-api-3411" to be "success or failure"
Jan 28 13:16:24.220: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Pending", Reason="", readiness=false. Elapsed: 93.836807ms
Jan 28 13:16:26.230: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103756342s
Jan 28 13:16:28.236: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110294346s
Jan 28 13:16:30.243: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117555697s
Jan 28 13:16:32.267: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141036884s
Jan 28 13:16:34.278: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152308294s
STEP: Saw pod success
Jan 28 13:16:34.278: INFO: Pod "downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16" satisfied condition "success or failure"
Jan 28 13:16:34.284: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16 container client-container:
STEP: delete the pod
Jan 28 13:16:34.382: INFO: Waiting for pod downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16 to disappear
Jan 28 13:16:34.406: INFO: Pod downwardapi-volume-7c5dc31d-a7b4-4c73-bc68-ce5fa94baa16 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:16:34.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3411" for this suite.
Jan 28 13:16:40.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:16:40.618: INFO: namespace downward-api-3411 deletion completed in 6.17699123s
• [SLOW TEST:16.707 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:16:40.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:16:40.800: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 13:16:40.924: INFO: Number of nodes with available pods: 0
Jan 28 13:16:40.925: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:42.634: INFO: Number of nodes with available pods: 0
Jan 28 13:16:42.634: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:43.082: INFO: Number of nodes with available pods: 0
Jan 28 13:16:43.083: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:44.035: INFO: Number of nodes with available pods: 0
Jan 28 13:16:44.035: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:44.965: INFO: Number of nodes with available pods: 0
Jan 28 13:16:44.965: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:47.178: INFO: Number of nodes with available pods: 0
Jan 28 13:16:47.178: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:47.946: INFO: Number of nodes with available pods: 0
Jan 28 13:16:47.947: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:48.946: INFO: Number of nodes with available pods: 0
Jan 28 13:16:48.947: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:16:49.939: INFO: Number of nodes with available pods: 2
Jan 28 13:16:49.939: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 28 13:16:49.987: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:49.987: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:51.034: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:51.034: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:52.033: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:52.033: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:53.064: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:53.064: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:54.037: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:54.037: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:55.035: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:55.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:56.036: INFO: Wrong image for pod: daemon-set-j6qzt. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:56.036: INFO: Pod daemon-set-j6qzt is not available
Jan 28 13:16:56.036: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:57.033: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:16:57.033: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:58.035: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:16:58.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:16:59.035: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:16:59.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:00.040: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:17:00.040: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:01.148: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:17:01.149: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:02.037: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:17:02.037: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:03.039: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:17:03.039: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:04.035: INFO: Pod daemon-set-hnhk2 is not available
Jan 28 13:17:04.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:05.033: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:06.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:07.038: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:08.033: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:09.044: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:09.044: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:10.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:10.035: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:11.034: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:11.034: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:12.036: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:12.037: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:13.037: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:13.037: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:14.038: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:14.038: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:15.035: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:15.035: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:16.058: INFO: Wrong image for pod: daemon-set-r6gcc. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 28 13:17:16.059: INFO: Pod daemon-set-r6gcc is not available
Jan 28 13:17:17.045: INFO: Pod daemon-set-shkr4 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 28 13:17:17.075: INFO: Number of nodes with available pods: 1
Jan 28 13:17:17.075: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:18.104: INFO: Number of nodes with available pods: 1
Jan 28 13:17:18.104: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:19.108: INFO: Number of nodes with available pods: 1
Jan 28 13:17:19.108: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:20.128: INFO: Number of nodes with available pods: 1
Jan 28 13:17:20.128: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:21.087: INFO: Number of nodes with available pods: 1
Jan 28 13:17:21.087: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:22.095: INFO: Number of nodes with available pods: 1
Jan 28 13:17:22.095: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:17:23.089: INFO: Number of nodes with available pods: 2
Jan 28 13:17:23.089: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5737, will wait for the garbage collector to delete the pods
Jan 28 13:17:23.169: INFO: Deleting DaemonSet.extensions daemon-set took: 8.085712ms
Jan 28 13:17:23.470: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.518024ms
Jan 28 13:17:37.889: INFO: Number of nodes with available pods: 0
Jan 28 13:17:37.889: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 13:17:37.894: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5737/daemonsets","resourceVersion":"22187017"},"items":null}
Jan 28 13:17:37.901: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5737/pods","resourceVersion":"22187018"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:17:37.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5737" for this suite.
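The long "Wrong image for pod" loop above is the expected shape of a RollingUpdate rollout: with the default maxUnavailable of 1, the controller replaces daemon pods one at a time, so one pod keeps reporting the old nginx image until its replacement becomes available on that node. A rough manual equivalent of what the test drives through the API, sketched with kubectl (namespace, DaemonSet name, and image are taken from the log; the container name `app` is an assumption):

```shell
# Update the pod template image to trigger the rollout seen in the log:
kubectl -n daemonsets-5737 set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0

# Watch the one-pod-at-a-time replacement until every node runs the new image:
kubectl -n daemonsets-5737 rollout status daemonset/daemon-set
```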
Jan 28 13:17:45.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:17:46.064: INFO: namespace daemonsets-5737 deletion completed in 8.13870966s
• [SLOW TEST:65.445 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:17:46.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:17:54.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6377" for this suite.
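The Kubelet test above runs a busybox command in a pod and asserts its stdout is retrievable through the logs endpoint; the log records only the setup and teardown. A manual sketch of the same check (the pod name `busybox-logs-check` and the echoed message are hypothetical, not taken from the test):

```shell
# Run a one-shot busybox pod whose command writes a line to stdout:
kubectl -n kubelet-test-6377 run busybox-logs-check --image=busybox \
  --restart=Never -- /bin/sh -c 'echo "output written to container logs"'

# Once the container has run, the kubelet should surface that line via logs:
kubectl -n kubelet-test-6377 logs busybox-logs-check
```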
Jan 28 13:18:38.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:18:38.417: INFO: namespace kubelet-test-6377 deletion completed in 44.150185915s
• [SLOW TEST:52.352 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:18:38.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 13:18:38.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4791'
Jan 28 13:18:38.678: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 13:18:38.678: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 28 13:18:38.767: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2hh4h]
Jan 28 13:18:38.768: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2hh4h" in namespace "kubectl-4791" to be "running and ready"
Jan 28 13:18:38.776: INFO: Pod "e2e-test-nginx-rc-2hh4h": Phase="Pending", Reason="", readiness=false. Elapsed: 7.617571ms
Jan 28 13:18:40.799: INFO: Pod "e2e-test-nginx-rc-2hh4h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030586888s
Jan 28 13:18:42.805: INFO: Pod "e2e-test-nginx-rc-2hh4h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037229228s
Jan 28 13:18:44.826: INFO: Pod "e2e-test-nginx-rc-2hh4h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058094066s
Jan 28 13:18:46.836: INFO: Pod "e2e-test-nginx-rc-2hh4h": Phase="Running", Reason="", readiness=true. Elapsed: 8.06783844s
Jan 28 13:18:46.836: INFO: Pod "e2e-test-nginx-rc-2hh4h" satisfied condition "running and ready"
Jan 28 13:18:46.836: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2hh4h]
Jan 28 13:18:46.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4791'
Jan 28 13:18:47.078: INFO: stderr: ""
Jan 28 13:18:47.078: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 28 13:18:47.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4791'
Jan 28 13:18:47.285: INFO: stderr: ""
Jan 28 13:18:47.285: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:18:47.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4791" for this suite.
Jan 28 13:19:09.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:19:09.491: INFO: namespace kubectl-4791 deletion completed in 22.155487372s
• [SLOW TEST:31.071 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:19:09.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7361, will wait for the garbage collector to delete the pods
Jan 28 13:19:21.701: INFO: Deleting Job.batch foo took: 14.071439ms
Jan 28 13:19:22.002: INFO: Terminating Job.batch foo pods took: 300.648924ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:20:06.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7361" for this suite.
Jan 28 13:20:12.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:20:12.935: INFO: namespace job-7361 deletion completed in 6.218653683s
• [SLOW TEST:63.444 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:20:12.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-73cf34bb-8fa1-486e-ac72-d90d9a9bc278
STEP: Creating a pod to test consume secrets
Jan 28 13:20:13.109: INFO: Waiting up to 5m0s for pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d" in namespace "secrets-7296" to be "success or failure"
Jan 28 13:20:13.134: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d": Phase="Pending", Reason="", readiness=false. Elapsed: 24.993087ms
Jan 28 13:20:15.161: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052046721s
Jan 28 13:20:17.167: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057742428s
Jan 28 13:20:19.193: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083750359s
Jan 28 13:20:21.198: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088120105s
STEP: Saw pod success
Jan 28 13:20:21.198: INFO: Pod "pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d" satisfied condition "success or failure"
Jan 28 13:20:21.201: INFO: Trying to get logs from node iruya-node pod pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d container secret-env-test:
STEP: delete the pod
Jan 28 13:20:21.257: INFO: Waiting for pod pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d to disappear
Jan 28 13:20:21.303: INFO: Pod pod-secrets-c89f56f4-7edb-45ae-be67-17ab4b8d229d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:20:21.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7296" for this suite.
Jan 28 13:20:27.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:20:27.457: INFO: namespace secrets-7296 deletion completed in 6.144412397s
• [SLOW TEST:14.521 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:20:27.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-665f94c9-5543-41d3-baae-4256cd1aaebc
STEP: Creating a pod to test consume configMaps
Jan 28 13:20:27.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79" in namespace "configmap-4127" to be "success or failure"
Jan 28 13:20:27.593: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79": Phase="Pending", Reason="", readiness=false. Elapsed: 12.972871ms
Jan 28 13:20:29.604: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023939071s
Jan 28 13:20:31.629: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048910991s
Jan 28 13:20:33.654: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074199419s
Jan 28 13:20:35.681: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.101010281s STEP: Saw pod success Jan 28 13:20:35.681: INFO: Pod "pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79" satisfied condition "success or failure" Jan 28 13:20:35.686: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79 container configmap-volume-test: STEP: delete the pod Jan 28 13:20:35.786: INFO: Waiting for pod pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79 to disappear Jan 28 13:20:35.795: INFO: Pod pod-configmaps-9802ba19-d0ae-4c0e-a04d-5ec0fbe0eb79 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:20:35.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4127" for this suite. Jan 28 13:20:41.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:20:41.999: INFO: namespace configmap-4127 deletion completed in 6.196435438s • [SLOW TEST:14.541 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:20:42.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:20:50.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3049" for this suite. Jan 28 13:21:42.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:21:42.435: INFO: namespace kubelet-test-3049 deletion completed in 52.180373025s • [SLOW TEST:60.436 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:21:42.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-78efc20c-7cb5-4c61-a8a3-8bedc46d381c STEP: Creating a pod to test consume configMaps Jan 28 13:21:42.593: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818" in namespace "projected-5683" to be "success or failure" Jan 28 13:21:42.612: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818": Phase="Pending", Reason="", readiness=false. Elapsed: 18.63391ms Jan 28 13:21:44.631: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037393103s Jan 28 13:21:46.639: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045147561s Jan 28 13:21:48.655: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061133757s Jan 28 13:21:50.676: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.08219798s STEP: Saw pod success Jan 28 13:21:50.676: INFO: Pod "pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818" satisfied condition "success or failure" Jan 28 13:21:50.683: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818 container projected-configmap-volume-test: STEP: delete the pod Jan 28 13:21:50.909: INFO: Waiting for pod pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818 to disappear Jan 28 13:21:50.921: INFO: Pod pod-projected-configmaps-854cee85-f1f4-45ca-9bad-7678647ac818 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:21:50.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5683" for this suite. Jan 28 13:21:56.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:21:57.181: INFO: namespace projected-5683 deletion completed in 6.249727991s • [SLOW TEST:14.745 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:21:57.181: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a10aef09-2476-47c0-aedc-37b079540b98
STEP: Creating a pod to test consume secrets
Jan 28 13:21:57.292: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4" in namespace "projected-8470" to be "success or failure"
Jan 28 13:21:57.312: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.532054ms
Jan 28 13:21:59.323: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030670789s
Jan 28 13:22:01.335: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042717095s
Jan 28 13:22:03.345: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052814663s
Jan 28 13:22:05.358: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065551738s
STEP: Saw pod success
Jan 28 13:22:05.358: INFO: Pod "pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4" satisfied condition "success or failure"
Jan 28 13:22:05.368: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4 container projected-secret-volume-test:
STEP: delete the pod
Jan 28 13:22:05.589: INFO: Waiting for pod pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4 to disappear
Jan 28 13:22:05.618: INFO: Pod pod-projected-secrets-ffbb6d70-01b5-4abf-875e-6ed158b1b9d4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:22:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8470" for this suite.
Jan 28 13:22:11.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:22:11.952: INFO: namespace projected-8470 deletion completed in 6.316727382s
• [SLOW TEST:14.771 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:22:11.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 13:22:12.098: INFO: Waiting up to 5m0s for pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b" in namespace "emptydir-3153" to be "success or failure"
Jan 28 13:22:12.107: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062907ms
Jan 28 13:22:14.115: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016531545s
Jan 28 13:22:16.125: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025951619s
Jan 28 13:22:18.136: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03737224s
Jan 28 13:22:20.167: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068131541s
STEP: Saw pod success
Jan 28 13:22:20.167: INFO: Pod "pod-72637afb-40f3-4618-a3cd-8706df9b803b" satisfied condition "success or failure"
Jan 28 13:22:20.171: INFO: Trying to get logs from node iruya-node pod pod-72637afb-40f3-4618-a3cd-8706df9b803b container test-container:
STEP: delete the pod
Jan 28 13:22:20.274: INFO: Waiting for pod pod-72637afb-40f3-4618-a3cd-8706df9b803b to disappear
Jan 28 13:22:20.329: INFO: Pod pod-72637afb-40f3-4618-a3cd-8706df9b803b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:22:20.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3153" for this suite.
Jan 28 13:22:26.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:22:26.526: INFO: namespace emptydir-3153 deletion completed in 6.18631274s
• [SLOW TEST:14.573 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:22:26.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 13:22:26.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3780'
Jan 28 13:22:28.426: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 13:22:28.426: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 28 13:22:32.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3780'
Jan 28 13:22:32.668: INFO: stderr: ""
Jan 28 13:22:32.668: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:22:32.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3780" for this suite.
Jan 28 13:22:54.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:22:54.838: INFO: namespace kubectl-3780 deletion completed in 22.162271243s
• [SLOW TEST:28.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:22:54.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan 28 13:22:54.966: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix672738299/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:22:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7499" for this suite.
Jan 28 13:23:01.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:23:01.285: INFO: namespace kubectl-7499 deletion completed in 6.172123464s
• [SLOW TEST:6.447 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:23:01.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:23:01.483: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a15a1608-4aab-459d-b8f2-330ccbf75d23", Controller:(*bool)(0xc002931bda), BlockOwnerDeletion:(*bool)(0xc002931bdb)}}
Jan 28 13:23:01.543: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"cf130662-33ca-49e0-9815-9425c25d600d", Controller:(*bool)(0xc00219ee6a), BlockOwnerDeletion:(*bool)(0xc00219ee6b)}}
Jan 28 13:23:01.586: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b8fc1f30-27f7-47b5-b730-2a2db592a6a1", Controller:(*bool)(0xc00219f02a), BlockOwnerDeletion:(*bool)(0xc00219f02b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:23:06.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9089" for this suite.
Jan 28 13:23:12.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:23:12.834: INFO: namespace gc-9089 deletion completed in 6.197595677s
• [SLOW TEST:11.549 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:23:12.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 28 13:23:12.967: INFO: Waiting up to 5m0s for pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd" in namespace "downward-api-7212" to be "success or failure"
Jan 28 13:23:12.974: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723095ms
Jan 28 13:23:14.987: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01901917s
Jan 28 13:23:16.997: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02923622s
Jan 28 13:23:19.051: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083444371s
Jan 28 13:23:21.060: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092028582s
STEP: Saw pod success
Jan 28 13:23:21.060: INFO: Pod "downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd" satisfied condition "success or failure"
Jan 28 13:23:21.065: INFO: Trying to get logs from node iruya-node pod downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd container dapi-container:
STEP: delete the pod
Jan 28 13:23:21.114: INFO: Waiting for pod downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd to disappear
Jan 28 13:23:21.170: INFO: Pod downward-api-a1295fd2-d5fe-4e86-a298-ad61ac80bcfd no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:23:21.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7212" for this suite.
Jan 28 13:23:27.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:23:27.411: INFO: namespace downward-api-7212 deletion completed in 6.233999973s
• [SLOW TEST:14.575 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:23:27.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0128 13:24:09.294316       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 13:24:09.294: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:24:09.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7502" for this suite.
Jan 28 13:24:27.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:24:27.497: INFO: namespace gc-7502 deletion completed in 18.168286069s
• [SLOW TEST:60.086 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:24:27.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 28 13:24:27.659: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:24:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4901" for this suite.
Jan 28 13:24:49.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:24:49.540: INFO: namespace pods-4901 deletion completed in 6.280678756s
• [SLOW TEST:22.041 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:24:49.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 28 13:24:49.680: INFO: Waiting up to 5m0s for pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c" in namespace "emptydir-9906" to be "success or failure"
Jan 28 13:24:49.688: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024352ms
Jan 28 13:24:51.709: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028969599s
Jan 28 13:24:53.731: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0506203s
Jan 28 13:24:57.198: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.51797304s
Jan 28 13:24:59.216: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535545143s
Jan 28 13:25:01.226: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.545520325s
STEP: Saw pod success
Jan 28 13:25:01.226: INFO: Pod "pod-ae743dd9-7f10-4e7c-a945-ef911181330c" satisfied condition "success or failure"
Jan 28 13:25:01.232: INFO: Trying to get logs from node iruya-node pod pod-ae743dd9-7f10-4e7c-a945-ef911181330c container test-container:
STEP: delete the pod
Jan 28 13:25:01.316: INFO: Waiting for pod pod-ae743dd9-7f10-4e7c-a945-ef911181330c to disappear
Jan 28 13:25:01.386: INFO: Pod pod-ae743dd9-7f10-4e7c-a945-ef911181330c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:25:01.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9906" for this suite.
Jan 28 13:25:07.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:25:07.573: INFO: namespace emptydir-9906 deletion completed in 6.173779256s
• [SLOW TEST:18.031 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:25:07.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 28 13:25:16.689: INFO: Successfully updated pod "annotationupdate042990e7-63e6-4353-8c6f-20e13607cf17"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:25:18.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-387" for this suite.
Jan 28 13:25:40.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:25:40.926: INFO: namespace projected-387 deletion completed in 22.119899668s
• [SLOW TEST:33.353 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:25:40.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 28 13:25:41.127: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188301,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 13:25:41.127: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188301,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 28 13:25:51.153: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188315,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 28 13:25:51.154: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188315,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 28 13:26:01.169: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188330,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 13:26:01.170: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188330,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 28 13:26:11.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188345,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 13:26:11.184: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-a,UID:e6df7817-c547-4565-a3e8-abfbd5062b24,ResourceVersion:22188345,Generation:0,CreationTimestamp:2020-01-28 13:25:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 28 13:26:21.204: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-b,UID:70c5f1fe-24d8-47c9-b271-f4c4dfff7c9c,ResourceVersion:22188359,Generation:0,CreationTimestamp:2020-01-28 13:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 13:26:21.205: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-b,UID:70c5f1fe-24d8-47c9-b271-f4c4dfff7c9c,ResourceVersion:22188359,Generation:0,CreationTimestamp:2020-01-28 13:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 28 13:26:31.216: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-b,UID:70c5f1fe-24d8-47c9-b271-f4c4dfff7c9c,ResourceVersion:22188373,Generation:0,CreationTimestamp:2020-01-28 13:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 13:26:31.216: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3378,SelfLink:/api/v1/namespaces/watch-3378/configmaps/e2e-watch-test-configmap-b,UID:70c5f1fe-24d8-47c9-b271-f4c4dfff7c9c,ResourceVersion:22188373,Generation:0,CreationTimestamp:2020-01-28 13:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:26:41.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3378" for this suite.
Jan 28 13:26:47.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:26:47.399: INFO: namespace watch-3378 deletion completed in 6.172143901s
• [SLOW TEST:66.472 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:26:47.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 28 13:26:55.562: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-817ff735-1494-47db-af56-7e710137daea,GenerateName:,Namespace:events-7676,SelfLink:/api/v1/namespaces/events-7676/pods/send-events-817ff735-1494-47db-af56-7e710137daea,UID:6c6e81f1-bdea-4b15-afbb-9e44f7744508,ResourceVersion:22188419,Generation:0,CreationTimestamp:2020-01-28 13:26:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 520852051,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w8t7b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w8t7b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-w8t7b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00267c150} {node.kubernetes.io/unreachable Exists NoExecute 0xc00267c170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:26:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:26:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:26:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:26:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-28 13:26:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-28 13:26:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://df4f766ae33b5539bac21ef185ab0dca007e2db043aadf3efd11d8f3331434d5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Jan 28 13:26:57.588: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 28 13:26:59.605: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:26:59.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7676" for this suite.
Jan 28 13:27:39.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:27:39.909: INFO: namespace events-7676 deletion completed in 40.189240921s
• [SLOW TEST:52.510 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:27:39.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan 28 13:27:40.070: INFO: Waiting up to 5m0s for pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653" in namespace "containers-2448" to be "success or failure"
Jan 28 13:27:40.076: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653": Phase="Pending", Reason="", readiness=false. Elapsed: 5.872683ms
Jan 28 13:27:42.090: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019419687s
Jan 28 13:27:44.099: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028620479s
Jan 28 13:27:46.113: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043185631s
Jan 28 13:27:48.125: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054397403s
STEP: Saw pod success
Jan 28 13:27:48.125: INFO: Pod "client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653" satisfied condition "success or failure"
Jan 28 13:27:48.130: INFO: Trying to get logs from node iruya-node pod client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653 container test-container:
STEP: delete the pod
Jan 28 13:27:48.263: INFO: Waiting for pod client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653 to disappear
Jan 28 13:27:48.272: INFO: Pod client-containers-bdbe243c-f44a-4ac4-bdb3-10ab1781d653 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:27:48.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2448" for this suite.
Jan 28 13:27:54.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:27:54.480: INFO: namespace containers-2448 deletion completed in 6.199687806s
• [SLOW TEST:14.571 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:27:54.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-3acdd832-be5e-4e85-a291-7029e05e8bce
STEP: Creating a pod to test consume secrets
Jan 28 13:27:54.627: INFO: Waiting up to 5m0s for pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471" in namespace "secrets-2601" to be "success or failure"
Jan 28 13:27:54.663: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471": Phase="Pending", Reason="", readiness=false. Elapsed: 35.587592ms
Jan 28 13:27:56.671: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043829381s
Jan 28 13:27:58.678: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050546161s
Jan 28 13:28:00.700: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073018342s
Jan 28 13:28:02.715: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088314412s
STEP: Saw pod success
Jan 28 13:28:02.716: INFO: Pod "pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471" satisfied condition "success or failure"
Jan 28 13:28:02.726: INFO: Trying to get logs from node iruya-node pod pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471 container secret-volume-test:
STEP: delete the pod
Jan 28 13:28:03.002: INFO: Waiting for pod pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471 to disappear
Jan 28 13:28:03.018: INFO: Pod pod-secrets-560758bf-3d2d-476c-8f24-eb0f4a3c3471 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:28:03.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2601" for this suite.
Jan 28 13:28:09.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:28:09.198: INFO: namespace secrets-2601 deletion completed in 6.170635506s
• [SLOW TEST:14.717 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:28:09.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0128 13:28:23.065136 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 13:28:23.065: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:28:23.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9030" for this suite.
Jan 28 13:28:33.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:28:33.352: INFO: namespace gc-9030 deletion completed in 9.026251279s
• [SLOW TEST:24.153 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:28:33.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 13:28:34.922: INFO: Waiting up to 5m0s for pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4" in namespace "emptydir-8940" to be "success or failure"
Jan 28 13:28:35.482: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 559.912075ms
Jan 28 13:28:37.500: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578384172s
Jan 28 13:28:39.512: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.589655695s
Jan 28 13:28:41.523: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601332538s
Jan 28 13:28:43.531: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608842566s
Jan 28 13:28:45.540: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618102268s
Jan 28 13:28:47.561: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.63870613s
STEP: Saw pod success
Jan 28 13:28:47.561: INFO: Pod "pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4" satisfied condition "success or failure"
Jan 28 13:28:47.571: INFO: Trying to get logs from node iruya-node pod pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4 container test-container:
STEP: delete the pod
Jan 28 13:28:47.831: INFO: Waiting for pod pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4 to disappear
Jan 28 13:28:47.840: INFO: Pod pod-88df3f9f-0d3d-4af1-9ec9-6297fdf5fce4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:28:47.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8940" for this suite.
Jan 28 13:28:53.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:28:54.092: INFO: namespace emptydir-8940 deletion completed in 6.231409751s • [SLOW TEST:20.740 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:28:54.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-dacb6da8-3b5d-483f-b39d-c4cb4750a571 STEP: Creating configMap with name cm-test-opt-upd-076b5c72-e758-4826-a781-dfcd043db911 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-dacb6da8-3b5d-483f-b39d-c4cb4750a571 STEP: Updating configmap cm-test-opt-upd-076b5c72-e758-4826-a781-dfcd043db911 STEP: Creating configMap with name cm-test-opt-create-40f87d58-881d-4cfb-b795-025b3e6ef635 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:29:08.777: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7263" for this suite. Jan 28 13:29:30.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:29:31.005: INFO: namespace configmap-7263 deletion completed in 22.221405402s • [SLOW TEST:36.912 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:29:31.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 28 13:29:31.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-513' Jan 28 13:29:31.299: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 28 13:29:31.299: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jan 28 13:29:31.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-513' Jan 28 13:29:31.568: INFO: stderr: "" Jan 28 13:29:31.568: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:29:31.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-513" for this suite. 
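The deprecation warning above means the `--generator=job/v1` form of `kubectl run` is slated for removal; the modern spelling is `kubectl create job`. As a hedged sketch, this is approximately the Job manifest that invocation produces (field values beyond the name, image, and restart policy are assumptions):

```shell
# Hedged sketch: roughly the Job created by the deprecated
# `kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1` call.
# On a live cluster it would be applied with:
#   kubectl apply -f /tmp/e2e-test-nginx-job.yaml -n kubectl-513
cat > /tmp/e2e-test-nginx-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure
EOF
echo "manifest written"
```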
Jan 28 13:29:37.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:29:37.748: INFO: namespace kubectl-513 deletion completed in 6.174104273s • [SLOW TEST:6.743 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:29:37.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:29:49.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9899" for this suite. 
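Adoption in the ReplicationController test above works because the controller's selector matches the labels on the already-running orphan pod, so the RC takes ownership instead of creating a replacement. A hedged sketch of that setup (names and image are illustrative, mirroring the log's `name` label):

```shell
# Sketch of the adoption scenario: a standalone pod labeled name=pod-adoption,
# then an RC whose selector matches that label. Applied in order on a real
# cluster, the RC adopts the existing pod rather than starting a second one.
cat > /tmp/adoption.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
echo "wrote $(grep -c '^kind:' /tmp/adoption.yaml) objects"
```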
Jan 28 13:30:11.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:30:11.252: INFO: namespace replication-controller-9899 deletion completed in 22.1917353s • [SLOW TEST:33.504 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:30:11.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7565.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 28 13:30:23.440: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084: the server could not find the requested resource (get pods dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084) Jan 28 13:30:23.450: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084: the server could not find the requested resource (get pods dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084) Jan 28 13:30:23.457: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084: the server could not find the requested resource (get pods 
dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084) Jan 28 13:30:23.466: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084: the server could not find the requested resource (get pods dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084) Jan 28 13:30:23.511: INFO: Lookups using dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord] Jan 28 13:30:28.575: INFO: DNS probes using dns-7565/dns-test-1b5c3d6e-db5e-4070-a0a7-7be3485c5084 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:30:28.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7565" for this suite. Jan 28 13:30:34.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:30:34.902: INFO: namespace dns-7565 deletion completed in 6.244424354s • [SLOW TEST:23.650 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:30:34.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 28 13:30:57.078: INFO: Container started at 2020-01-28 13:30:41 +0000 UTC, pod became ready at 2020-01-28 13:30:56 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:30:57.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3618" for this suite. Jan 28 13:31:19.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:31:19.255: INFO: namespace container-probe-3618 deletion completed in 22.168866102s • [SLOW TEST:44.352 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:31:19.258: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:31:31.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2872" for this suite. Jan 28 13:31:37.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:31:37.595: INFO: namespace kubelet-test-2872 deletion completed in 6.158314485s • [SLOW TEST:18.337 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Jan 28 13:31:37.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 28 13:31:37.804: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40" in namespace "downward-api-6698" to be "success or failure" Jan 28 13:31:37.850: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Pending", Reason="", readiness=false. Elapsed: 46.283352ms Jan 28 13:31:39.877: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072476366s Jan 28 13:31:41.885: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081026922s Jan 28 13:31:43.906: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101450413s Jan 28 13:31:45.915: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111004344s Jan 28 13:31:47.944: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.140029507s STEP: Saw pod success Jan 28 13:31:47.945: INFO: Pod "downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40" satisfied condition "success or failure" Jan 28 13:31:47.970: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40 container client-container: STEP: delete the pod Jan 28 13:31:48.072: INFO: Waiting for pod downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40 to disappear Jan 28 13:31:48.087: INFO: Pod downwardapi-volume-5618ef0b-4eda-4d80-b6e4-f4dd56182b40 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:31:48.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6698" for this suite. Jan 28 13:31:54.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:31:54.333: INFO: namespace downward-api-6698 deletion completed in 6.24235287s • [SLOW TEST:16.737 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:31:54.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service 
account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-3e004990-fc20-4ff5-a106-40eec5f97281 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:32:04.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8948" for this suite. Jan 28 13:32:26.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:32:26.958: INFO: namespace configmap-8948 deletion completed in 22.356586932s • [SLOW TEST:32.624 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:32:26.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on 
item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 28 13:32:27.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75" in namespace "downward-api-5630" to be "success or failure" Jan 28 13:32:27.114: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75": Phase="Pending", Reason="", readiness=false. Elapsed: 55.298705ms Jan 28 13:32:29.129: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070277994s Jan 28 13:32:31.145: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086171117s Jan 28 13:32:33.154: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095490522s Jan 28 13:32:35.161: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103091566s STEP: Saw pod success Jan 28 13:32:35.162: INFO: Pod "downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75" satisfied condition "success or failure" Jan 28 13:32:35.165: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75 container client-container: STEP: delete the pod Jan 28 13:32:35.500: INFO: Waiting for pod downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75 to disappear Jan 28 13:32:35.510: INFO: Pod downwardapi-volume-e5adaf0d-460f-404f-98f5-217a7ee7bb75 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:32:35.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5630" for this suite. 
Jan 28 13:32:41.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:32:41.814: INFO: namespace downward-api-5630 deletion completed in 6.294088645s • [SLOW TEST:14.856 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:32:41.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 28 13:32:41.922: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:32:43.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5135" for this suite. 
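The CRD test above only creates and deletes definition objects; on the v1.15 apiserver noted at the top of this run, those live under `apiextensions.k8s.io/v1beta1` (the `v1` API replaced it in Kubernetes 1.16+). A minimal, hypothetical definition of the kind such a test exercises — the group and names here are illustrative, not taken from the log:

```shell
# Hypothetical minimal CustomResourceDefinition of the sort the test
# creates and deletes; v1beta1 matches the kube-apiserver v1.15.1 in this run.
cat > /tmp/crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
grep -q '^kind: CustomResourceDefinition' /tmp/crd.yaml && echo "crd manifest ok"
```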
Jan 28 13:32:49.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:32:49.233: INFO: namespace custom-resource-definition-5135 deletion completed in 6.209460966s • [SLOW TEST:7.418 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:32:49.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jan 28 13:32:49.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2004' Jan 28 13:32:51.677: INFO: stderr: "" Jan 28 13:32:51.677: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 28 13:32:52.691: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:52.691: INFO: Found 0 / 1 Jan 28 13:32:53.697: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:53.698: INFO: Found 0 / 1 Jan 28 13:32:54.693: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:54.693: INFO: Found 0 / 1 Jan 28 13:32:55.688: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:55.688: INFO: Found 0 / 1 Jan 28 13:32:56.690: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:56.691: INFO: Found 0 / 1 Jan 28 13:32:57.691: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:57.691: INFO: Found 0 / 1 Jan 28 13:32:58.706: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:58.707: INFO: Found 0 / 1 Jan 28 13:32:59.690: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:32:59.690: INFO: Found 0 / 1 Jan 28 13:33:00.692: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:33:00.692: INFO: Found 1 / 1 Jan 28 13:33:00.692: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 28 13:33:00.699: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:33:00.699: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 28 13:33:00.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fb9l6 --namespace=kubectl-2004 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 28 13:33:00.948: INFO: stderr: "" Jan 28 13:33:00.949: INFO: stdout: "pod/redis-master-fb9l6 patched\n" STEP: checking annotations Jan 28 13:33:00.955: INFO: Selector matched 1 pods for map[app:redis] Jan 28 13:33:00.955: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
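The `kubectl patch pod ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call above sends a merge-style patch: only the fields present in the payload change, and any existing annotations on the pod are preserved. The payload on its own (the pod name redis-master-fb9l6 is specific to this run):

```shell
# The annotation patch from the log, saved to a file. The log applies it
# inline with -p; newer kubectl also accepts --patch-file (an assumption
# here, not something this 1.15-era run uses).
cat > /tmp/annotation-patch.json <<'EOF'
{"metadata":{"annotations":{"x":"y"}}}
EOF
echo "payload: $(cat /tmp/annotation-patch.json)"
```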
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:33:00.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2004" for this suite. Jan 28 13:33:20.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:33:21.182: INFO: namespace kubectl-2004 deletion completed in 20.221523262s • [SLOW TEST:31.948 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:33:21.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 28 13:33:21.256: INFO: Creating ReplicaSet my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec Jan 28 13:33:21.267: INFO: Pod name my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec: Found 0 pods out of 1 Jan 28 13:33:26.277: INFO: Pod name 
my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec: Found 1 pods out of 1 Jan 28 13:33:26.278: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec" is running Jan 28 13:33:30.292: INFO: Pod "my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec-s8bwn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 13:33:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 13:33:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 13:33:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 13:33:21 +0000 UTC Reason: Message:}]) Jan 28 13:33:30.292: INFO: Trying to dial the pod Jan 28 13:33:35.345: INFO: Controller my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec: Got expected result from replica 1 [my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec-s8bwn]: "my-hostname-basic-a9c212dd-e578-4457-bd3d-9923ef632aec-s8bwn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:33:35.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6231" for this suite. 
Jan 28 13:33:41.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:33:41.465: INFO: namespace replicaset-6231 deletion completed in 6.112610257s • [SLOW TEST:20.283 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:33:41.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jan 28 13:33:41.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5378' Jan 28 13:33:42.010: INFO: stderr: "" Jan 28 13:33:42.010: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 28 13:33:42.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5378' Jan 28 13:33:42.246: INFO: stderr: "" Jan 28 13:33:42.246: INFO: stdout: "update-demo-nautilus-sbg6x update-demo-nautilus-vcmpf " Jan 28 13:33:42.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbg6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:42.429: INFO: stderr: "" Jan 28 13:33:42.429: INFO: stdout: "" Jan 28 13:33:42.430: INFO: update-demo-nautilus-sbg6x is created but not running Jan 28 13:33:47.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5378' Jan 28 13:33:47.692: INFO: stderr: "" Jan 28 13:33:47.692: INFO: stdout: "update-demo-nautilus-sbg6x update-demo-nautilus-vcmpf " Jan 28 13:33:47.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbg6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:49.318: INFO: stderr: "" Jan 28 13:33:49.318: INFO: stdout: "" Jan 28 13:33:49.318: INFO: update-demo-nautilus-sbg6x is created but not running Jan 28 13:33:54.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5378' Jan 28 13:33:54.469: INFO: stderr: "" Jan 28 13:33:54.470: INFO: stdout: "update-demo-nautilus-sbg6x update-demo-nautilus-vcmpf " Jan 28 13:33:54.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbg6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:54.652: INFO: stderr: "" Jan 28 13:33:54.652: INFO: stdout: "true" Jan 28 13:33:54.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sbg6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:54.779: INFO: stderr: "" Jan 28 13:33:54.779: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 13:33:54.779: INFO: validating pod update-demo-nautilus-sbg6x Jan 28 13:33:54.827: INFO: got data: { "image": "nautilus.jpg" } Jan 28 13:33:54.828: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 13:33:54.828: INFO: update-demo-nautilus-sbg6x is verified up and running Jan 28 13:33:54.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcmpf -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:54.952: INFO: stderr: "" Jan 28 13:33:54.953: INFO: stdout: "true" Jan 28 13:33:54.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vcmpf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5378' Jan 28 13:33:55.080: INFO: stderr: "" Jan 28 13:33:55.080: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 28 13:33:55.080: INFO: validating pod update-demo-nautilus-vcmpf Jan 28 13:33:55.110: INFO: got data: { "image": "nautilus.jpg" } Jan 28 13:33:55.111: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 28 13:33:55.111: INFO: update-demo-nautilus-vcmpf is verified up and running STEP: using delete to clean up resources Jan 28 13:33:55.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5378' Jan 28 13:33:55.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 28 13:33:55.241: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 28 13:33:55.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5378' Jan 28 13:33:55.364: INFO: stderr: "No resources found.\n" Jan 28 13:33:55.364: INFO: stdout: "" Jan 28 13:33:55.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5378 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 13:33:55.461: INFO: stderr: "" Jan 28 13:33:55.461: INFO: stdout: "update-demo-nautilus-sbg6x\nupdate-demo-nautilus-vcmpf\n" Jan 28 13:33:55.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5378' Jan 28 13:33:56.963: INFO: stderr: "No resources found.\n" Jan 28 13:33:56.963: INFO: stdout: "" Jan 28 13:33:56.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5378 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 28 13:33:57.314: INFO: stderr: "" Jan 28 13:33:57.314: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:33:57.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5378" for this suite. 
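The repeated kubectl invocations above rely on Go templates with an `exists` helper to test whether the update-demo container has reached the running state. `exists` is not a built-in `text/template` function; the sketch below registers a simplified stand-in via a FuncMap (an assumption about its behavior) to show how such a template evaluates against pod-like data:

```go
package main

import (
	"os"
	"text/template"
)

// exists reports whether the nested keys are all present — a simplified
// stand-in (assumption) for the "exists" helper the templates above use.
func exists(v interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := v.(map[string]interface{})
		if !ok {
			return false
		}
		if v, ok = m[k]; !ok {
			return false
		}
	}
	return true
}

func main() {
	tmpl := template.Must(template.New("running").
		Funcs(template.FuncMap{"exists": exists}).
		Parse(`{{if (exists . "status" "containerStatuses")}}true{{end}}`))

	pod := map[string]interface{}{
		"status": map[string]interface{}{
			"containerStatuses": []interface{}{},
		},
	}
	tmpl.Execute(os.Stdout, pod) // prints "true"
}
```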
Jan 28 13:34:19.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:34:19.520: INFO: namespace kubectl-5378 deletion completed in 22.199179723s • [SLOW TEST:38.055 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:34:19.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 28 13:34:19.658: INFO: Waiting up to 5m0s for pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339" in namespace "emptydir-4452" to be "success or failure" Jan 28 13:34:19.681: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Pending", Reason="", readiness=false. Elapsed: 23.024688ms Jan 28 13:34:21.695: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036328936s Jan 28 13:34:23.709: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050405258s Jan 28 13:34:25.742: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083327133s Jan 28 13:34:27.750: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092088645s Jan 28 13:34:29.767: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108635071s STEP: Saw pod success Jan 28 13:34:29.767: INFO: Pod "pod-b6614e68-bc32-43a1-8078-06cb7bfc6339" satisfied condition "success or failure" Jan 28 13:34:29.774: INFO: Trying to get logs from node iruya-node pod pod-b6614e68-bc32-43a1-8078-06cb7bfc6339 container test-container: STEP: delete the pod Jan 28 13:34:30.068: INFO: Waiting for pod pod-b6614e68-bc32-43a1-8078-06cb7bfc6339 to disappear Jan 28 13:34:30.077: INFO: Pod pod-b6614e68-bc32-43a1-8078-06cb7bfc6339 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:34:30.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4452" for this suite. 
Jan 28 13:34:36.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:34:36.291: INFO: namespace emptydir-4452 deletion completed in 6.209474302s • [SLOW TEST:16.771 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:34:36.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:34:36.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7425" for this suite. 
Jan 28 13:34:58.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:34:58.690: INFO: namespace pods-7425 deletion completed in 22.128832102s • [SLOW TEST:22.398 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:34:58.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jan 28 13:34:58.846: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:34:58.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "kubectl-9476" for this suite. Jan 28 13:35:05.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:35:05.101: INFO: namespace kubectl-9476 deletion completed in 6.133234037s • [SLOW TEST:6.410 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:35:05.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jan 28 13:35:05.272: INFO: Waiting up to 5m0s for pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9" in namespace "downward-api-2032" to be "success or failure" Jan 28 13:35:05.291: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.185562ms Jan 28 13:35:07.300: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027027188s Jan 28 13:35:09.309: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036144887s Jan 28 13:35:11.369: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096218218s Jan 28 13:35:13.389: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11600426s Jan 28 13:35:15.403: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.130410372s STEP: Saw pod success Jan 28 13:35:15.403: INFO: Pod "downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9" satisfied condition "success or failure" Jan 28 13:35:15.413: INFO: Trying to get logs from node iruya-node pod downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9 container dapi-container: STEP: delete the pod Jan 28 13:35:15.504: INFO: Waiting for pod downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9 to disappear Jan 28 13:35:15.511: INFO: Pod downward-api-3b087b76-26ab-44d4-882b-9ba6890797a9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:35:15.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2032" for this suite. 
Jan 28 13:35:21.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:35:21.691: INFO: namespace downward-api-2032 deletion completed in 6.167085118s • [SLOW TEST:16.589 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:35:21.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-7882 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7882 to expose endpoints map[] Jan 28 13:35:21.875: INFO: successfully validated that service endpoint-test2 in namespace services-7882 exposes endpoints map[] (20.778009ms elapsed) STEP: Creating pod pod1 in namespace services-7882 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7882 to expose endpoints map[pod1:[80]] Jan 28 13:35:26.097: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.203097379s 
elapsed, will retry) Jan 28 13:35:31.176: INFO: successfully validated that service endpoint-test2 in namespace services-7882 exposes endpoints map[pod1:[80]] (9.282238666s elapsed) STEP: Creating pod pod2 in namespace services-7882 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7882 to expose endpoints map[pod1:[80] pod2:[80]] Jan 28 13:35:37.059: INFO: Unexpected endpoints: found map[827d17ed-2aed-44d2-8d08-fc6b088861fb:[80]], expected map[pod1:[80] pod2:[80]] (5.875686629s elapsed, will retry) Jan 28 13:35:40.162: INFO: successfully validated that service endpoint-test2 in namespace services-7882 exposes endpoints map[pod1:[80] pod2:[80]] (8.978056305s elapsed) STEP: Deleting pod pod1 in namespace services-7882 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7882 to expose endpoints map[pod2:[80]] Jan 28 13:35:40.302: INFO: successfully validated that service endpoint-test2 in namespace services-7882 exposes endpoints map[pod2:[80]] (111.034902ms elapsed) STEP: Deleting pod pod2 in namespace services-7882 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7882 to expose endpoints map[] Jan 28 13:35:40.406: INFO: successfully validated that service endpoint-test2 in namespace services-7882 exposes endpoints map[] (67.702855ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:35:40.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7882" for this suite. 
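The "exposes endpoints map[...]" validations above retry until the observed pod-to-ports map matches the expected one. The comparison step itself can be sketched as follows (a simplified stand-in for the framework's check, with port order ignored):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsMatch compares the endpoints observed for a service against
// the expected pod->ports map, ignoring port order.
func endpointsMatch(found, want map[string][]int) bool {
	if len(found) != len(want) {
		return false
	}
	for pod, ports := range want {
		got, ok := found[pod]
		if !ok {
			return false
		}
		// Copy and sort both slices so order does not matter.
		g, w := append([]int(nil), got...), append([]int(nil), ports...)
		sort.Ints(g)
		sort.Ints(w)
		if !reflect.DeepEqual(g, w) {
			return false
		}
	}
	return true
}

func main() {
	found := map[string][]int{"pod1": {80}, "pod2": {80}}
	want := map[string][]int{"pod1": {80}, "pod2": {80}}
	fmt.Println(endpointsMatch(found, want)) // true
}
```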
Jan 28 13:36:02.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:36:02.698: INFO: namespace services-7882 deletion completed in 22.226795617s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.008 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:36:02.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 28 13:36:02.776: INFO: Creating deployment "test-recreate-deployment" Jan 28 13:36:02.867: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 28 13:36:02.897: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 28 13:36:04.923: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 28 13:36:04.942: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815363, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 13:36:06.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815363, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 13:36:08.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815363, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715815362, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 28 13:36:10.958: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 28 13:36:10.973: INFO: Updating deployment test-recreate-deployment Jan 28 13:36:10.973: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 28 13:36:11.309: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7766,SelfLink:/apis/apps/v1/namespaces/deployment-7766/deployments/test-recreate-deployment,UID:58b6061b-61d0-4071-b9e3-c1d081eec581,ResourceVersion:22189940,Generation:2,CreationTimestamp:2020-01-28 13:36:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-28 13:36:11 +0000 UTC 2020-01-28 13:36:11 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-28 13:36:11 +0000 UTC 2020-01-28 13:36:02 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 28 13:36:11.321: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7766,SelfLink:/apis/apps/v1/namespaces/deployment-7766/replicasets/test-recreate-deployment-5c8c9cc69d,UID:3bf88499-9430-4d0b-a711-0241f8b1eea2,ResourceVersion:22189937,Generation:1,CreationTimestamp:2020-01-28 13:36:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 58b6061b-61d0-4071-b9e3-c1d081eec581 0xc002f13157 0xc002f13158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 28 13:36:11.321: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 28 13:36:11.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7766,SelfLink:/apis/apps/v1/namespaces/deployment-7766/replicasets/test-recreate-deployment-6df85df6b9,UID:3da9aa5f-1f8a-467c-88fc-1f128589ed36,ResourceVersion:22189929,Generation:2,CreationTimestamp:2020-01-28 13:36:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 58b6061b-61d0-4071-b9e3-c1d081eec581 0xc002f13257 0xc002f13258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 28 13:36:11.387: INFO: Pod "test-recreate-deployment-5c8c9cc69d-8xwqp" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-8xwqp,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7766,SelfLink:/api/v1/namespaces/deployment-7766/pods/test-recreate-deployment-5c8c9cc69d-8xwqp,UID:2d1c9647-736b-46ed-afbe-74d2420c555d,ResourceVersion:22189941,Generation:0,CreationTimestamp:2020-01-28 13:36:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 3bf88499-9430-4d0b-a711-0241f8b1eea2 0xc002885cb7 0xc002885cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-h5qv2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h5qv2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-h5qv2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002885d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002885d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:36:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:36:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:36:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:36:11 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-28 13:36:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:36:11.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7766" for this suite. 
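The Deployment object that drives the Recreate rollout above is never printed in the log, only its ReplicaSets and Pods. As a rough illustration, a manifest exercising the same path might look like this (the names, labels, image, and grace period are taken from the log; everything else is assumed):

```yaml
# Hypothetical reconstruction of the e2e Deployment -- only names, labels,
# image, and terminationGracePeriodSeconds appear in the log above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate        # old pods are deleted before new ones are created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

With `strategy.type: Recreate`, the old ReplicaSet is scaled to 0 (as seen in the `Replicas:*0` dump above) before the new ReplicaSet's pod is created.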
Jan 28 13:36:17.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:36:17.742: INFO: namespace deployment-7766 deletion completed in 6.338281395s
• [SLOW TEST:15.043 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:36:17.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 13:36:17.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb" in namespace "projected-1264" to be "success or failure"
Jan 28 13:36:17.957: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 32.282238ms
Jan 28 13:36:19.967: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042395429s
Jan 28 13:36:21.979: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054466068s
Jan 28 13:36:23.997: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072141376s
Jan 28 13:36:26.007: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082008939s
Jan 28 13:36:28.022: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.096968629s
Jan 28 13:36:30.032: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.107170035s
STEP: Saw pod success
Jan 28 13:36:30.032: INFO: Pod "downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb" satisfied condition "success or failure"
Jan 28 13:36:30.036: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb container client-container:
STEP: delete the pod
Jan 28 13:36:30.140: INFO: Waiting for pod downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb to disappear
Jan 28 13:36:30.150: INFO: Pod downwardapi-volume-7e9eee4a-95b0-4c00-92eb-e2f810f2a8cb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:36:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1264" for this suite.
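The downward API pod that the test above creates is not shown in the log. A sketch of an equivalent pod, assuming a busybox image and a 250m request (the log confirms only the container name `client-container` and the use of a projected downward API volume), might be:

```yaml
# Hypothetical sketch: expose the container's CPU request as a file
# via a projected downwardAPI volume. Image and request value are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed; the log does not show the image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # assumed value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
```

The test then reads the container's logs and checks that the printed value matches the declared request, which is why a `Succeeded` phase counts as "success".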
Jan 28 13:36:36.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:36:36.321: INFO: namespace projected-1264 deletion completed in 6.156491948s
• [SLOW TEST:18.578 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:36:36.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:37:23.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5782" for this suite.
Jan 28 13:37:29.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:37:29.970: INFO: namespace namespaces-5782 deletion completed in 6.21898022s
STEP: Destroying namespace "nsdeletetest-8520" for this suite.
Jan 28 13:37:29.974: INFO: Namespace nsdeletetest-8520 was already deleted
STEP: Destroying namespace "nsdeletetest-8258" for this suite.
Jan 28 13:37:36.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:37:36.126: INFO: namespace nsdeletetest-8258 deletion completed in 6.152053557s
• [SLOW TEST:59.805 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:37:36.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0128 13:37:39.012633 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 13:37:39.012: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:37:39.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-293" for this suite.
Jan 28 13:37:46.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:37:46.907: INFO: namespace gc-293 deletion completed in 7.889249404s
• [SLOW TEST:10.780 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:37:46.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan 28 13:37:47.889: INFO: created pod pod-service-account-defaultsa
Jan 28 13:37:47.890: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 28 13:37:47.905: INFO: created pod pod-service-account-mountsa
Jan 28 13:37:47.905: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 28 13:37:47.965: INFO: created pod pod-service-account-nomountsa
Jan 28 13:37:47.966: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 28 13:37:47.986: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 28 13:37:47.986: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 28 13:37:48.028: INFO: created pod pod-service-account-mountsa-mountspec
Jan 28 13:37:48.028: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 28 13:37:48.042: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 28 13:37:48.042: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 28 13:37:48.170: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 28 13:37:48.170: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 28 13:37:48.226: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 28 13:37:48.226: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 28 13:37:48.348: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 28 13:37:48.349: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:37:48.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4456" for this suite.
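The pod-name matrix above encodes the two places where token automount can be set: on the ServiceAccount and on the pod spec, with the pod-level field winning when both are set (e.g. `nomountsa-mountspec` mounts the token, `defaultsa-nomountspec` does not). A minimal sketch of the opt-out combination, with hypothetical names and an assumed image, could be:

```yaml
# Hypothetical sketch of opting out of token automount at both levels.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                    # hypothetical name
automountServiceAccountToken: false   # opt out at the ServiceAccount level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount                   # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level field overrides the SA setting
  containers:
  - name: main
    image: busybox                    # assumed
    command: ["sleep", "3600"]
```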
Jan 28 13:38:17.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:38:17.170: INFO: namespace svcaccounts-4456 deletion completed in 28.805119738s
• [SLOW TEST:30.263 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:38:17.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 13:38:26.625: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:38:26.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5578" for this suite.
Jan 28 13:38:32.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:38:32.964: INFO: namespace container-runtime-5578 deletion completed in 6.185897928s
• [SLOW TEST:15.793 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:38:32.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-55142572-4b09-4a94-b528-0ff33f7bb90d
STEP: Creating a pod to test consume secrets
Jan 28 13:38:33.504: INFO: Waiting up to 5m0s for pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c" in namespace "secrets-4675" to be "success or failure"
Jan 28 13:38:33.548: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.034616ms
Jan 28 13:38:35.894: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389102802s
Jan 28 13:38:37.902: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397441093s
Jan 28 13:38:39.920: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415951702s
Jan 28 13:38:41.938: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.434023765s
Jan 28 13:38:43.954: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.449466377s
STEP: Saw pod success
Jan 28 13:38:43.954: INFO: Pod "pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c" satisfied condition "success or failure"
Jan 28 13:38:43.974: INFO: Trying to get logs from node iruya-node pod pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c container secret-volume-test:
STEP: delete the pod
Jan 28 13:38:44.328: INFO: Waiting for pod pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c to disappear
Jan 28 13:38:44.334: INFO: Pod pod-secrets-e1702d6f-56eb-4149-bec3-30ba8bdff82c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:38:44.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4675" for this suite.
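The secret-volume test above relies on Secret names being namespace-scoped: a Secret of the same name in another namespace (here `secret-namespace-9851`, torn down later in the log) must not interfere with the mount. A sketch of the pod side, with an assumed image, payload, and mount path, might be:

```yaml
# Hypothetical sketch: mount a namespace-local Secret as a volume.
# The secret name and payload here are assumed, not taken from the log.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test          # the same name can exist independently elsewhere
  namespace: secrets-4675
stringData:
  data-1: value-1            # assumed payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example  # hypothetical name
  namespace: secrets-4675
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox           # assumed
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # resolved only within the pod's namespace
```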
Jan 28 13:38:50.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:38:50.514: INFO: namespace secrets-4675 deletion completed in 6.17165986s
STEP: Destroying namespace "secret-namespace-9851" for this suite.
Jan 28 13:38:56.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:38:56.746: INFO: namespace secret-namespace-9851 deletion completed in 6.231214551s
• [SLOW TEST:23.781 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:38:56.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:39:05.070: INFO: Waiting up to 5m0s for pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8" in namespace "pods-2267" to be "success or failure"
Jan 28 13:39:05.086: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.619445ms
Jan 28 13:39:07.839: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.768353049s
Jan 28 13:39:09.854: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.783183427s
Jan 28 13:39:11.882: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81129844s
Jan 28 13:39:13.895: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824585531s
Jan 28 13:39:15.905: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.834324457s
STEP: Saw pod success
Jan 28 13:39:15.905: INFO: Pod "client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8" satisfied condition "success or failure"
Jan 28 13:39:15.910: INFO: Trying to get logs from node iruya-node pod client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8 container env3cont:
STEP: delete the pod
Jan 28 13:39:16.175: INFO: Waiting for pod client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8 to disappear
Jan 28 13:39:16.186: INFO: Pod client-envvars-90c92e66-3924-4b5d-aa82-e99d0fd43df8 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:39:16.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2267" for this suite.
Jan 28 13:39:58.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:39:58.412: INFO: namespace pods-2267 deletion completed in 42.177935606s
• [SLOW TEST:61.664 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:39:58.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan 28 13:39:58.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 28 13:39:58.612: INFO: stderr: ""
Jan 28 13:39:58.612: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:39:58.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1814" for this suite.
Jan 28 13:40:04.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:40:04.781: INFO: namespace kubectl-1814 deletion completed in 6.163050345s
• [SLOW TEST:6.369 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:40:04.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-48c0feed-b28d-4a05-ab7a-c6aad1cec727
STEP: Creating secret with name secret-projected-all-test-volume-db270cfc-c67e-400a-9f3e-a6997d69a55e
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 28 13:40:04.998: INFO: Waiting up to 5m0s for pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac" in namespace "projected-6146" to be "success or failure"
Jan 28 13:40:05.029: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac": Phase="Pending", Reason="", readiness=false. Elapsed: 30.37022ms
Jan 28 13:40:07.040: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041897479s
Jan 28 13:40:09.052: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054124487s
Jan 28 13:40:11.062: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06342218s
Jan 28 13:40:13.112: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113946778s
STEP: Saw pod success
Jan 28 13:40:13.112: INFO: Pod "projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac" satisfied condition "success or failure"
Jan 28 13:40:13.118: INFO: Trying to get logs from node iruya-node pod projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac container projected-all-volume-test:
STEP: delete the pod
Jan 28 13:40:13.176: INFO: Waiting for pod projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac to disappear
Jan 28 13:40:13.185: INFO: Pod projected-volume-330a62d7-e493-4486-8bab-65a2f5ad32ac no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:40:13.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6146" for this suite.
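The "Projected combined" test above mounts a ConfigMap, a Secret, and downward API fields through a single `projected` volume. A rough sketch of such a pod, using the object names from the log without their random suffixes and an assumed image, key names, and paths:

```yaml
# Hypothetical sketch: one projected volume combining three sources.
# Object names are shortened from the log; keys, paths, and the image are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                   # assumed
    command: ["sh", "-c", "cat /all/podname /all/cm-data /all/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume
          items:
          - key: data              # assumed key
            path: cm-data
      - secret:
          name: secret-projected-all-test-volume
          items:
          - key: data              # assumed key
            path: secret-data
```

All three sources appear as files under the same mount point, which is what the test's `cat` of the container logs verifies.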
Jan 28 13:40:19.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:40:19.398: INFO: namespace projected-6146 deletion completed in 6.20594975s • [SLOW TEST:14.616 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:40:19.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1391 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1391 STEP: Creating statefulset with conflicting port in namespace statefulset-1391 STEP: Waiting until pod test-pod will start running in namespace statefulset-1391 
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1391 Jan 28 13:40:29.654: INFO: Observed stateful pod in namespace: statefulset-1391, name: ss-0, uid: 5402e135-516b-4b59-a3e3-bb400c8853a3, status phase: Pending. Waiting for statefulset controller to delete. Jan 28 13:40:36.516: INFO: Observed stateful pod in namespace: statefulset-1391, name: ss-0, uid: 5402e135-516b-4b59-a3e3-bb400c8853a3, status phase: Failed. Waiting for statefulset controller to delete. Jan 28 13:40:36.547: INFO: Observed stateful pod in namespace: statefulset-1391, name: ss-0, uid: 5402e135-516b-4b59-a3e3-bb400c8853a3, status phase: Failed. Waiting for statefulset controller to delete. Jan 28 13:40:36.554: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1391 STEP: Removing pod with conflicting port in namespace statefulset-1391 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1391 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jan 28 13:40:46.768: INFO: Deleting all statefulset in ns statefulset-1391 Jan 28 13:40:46.772: INFO: Scaling statefulset ss to 0 Jan 28 13:40:56.872: INFO: Waiting for statefulset status.replicas updated to 0 Jan 28 13:40:56.877: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 28 13:40:56.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1391" for this suite. 
Jan 28 13:41:02.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 28 13:41:03.132: INFO: namespace statefulset-1391 deletion completed in 6.211569474s • [SLOW TEST:43.735 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 28 13:41:03.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 28 13:41:03.232: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 37.637424ms)
Jan 28 13:41:03.292: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 59.152304ms)
Jan 28 13:41:03.302: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.141999ms)
Jan 28 13:41:03.311: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.572034ms)
Jan 28 13:41:03.318: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.499701ms)
Jan 28 13:41:03.327: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.220476ms)
Jan 28 13:41:03.335: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.571774ms)
Jan 28 13:41:03.343: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.717359ms)
Jan 28 13:41:03.351: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.645944ms)
Jan 28 13:41:03.364: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.028672ms)
Jan 28 13:41:03.371: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.553727ms)
Jan 28 13:41:03.376: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.497871ms)
Jan 28 13:41:03.383: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.390074ms)
Jan 28 13:41:03.393: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.412418ms)
Jan 28 13:41:03.402: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.701435ms)
Jan 28 13:41:03.408: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.785867ms)
Jan 28 13:41:03.417: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.046164ms)
Jan 28 13:41:03.424: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.605317ms)
Jan 28 13:41:03.432: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.869417ms)
Jan 28 13:41:03.441: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.685287ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:41:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9056" for this suite.
Jan 28 13:41:09.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:41:09.691: INFO: namespace proxy-9056 deletion completed in 6.242339427s

• [SLOW TEST:6.558 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:41:09.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 in namespace container-probe-1665
Jan 28 13:41:19.828: INFO: Started pod liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 in namespace container-probe-1665
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 13:41:19.832: INFO: Initial restart count of pod liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is 0
Jan 28 13:41:39.937: INFO: Restart count of pod container-probe-1665/liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is now 1 (20.105809485s elapsed)
Jan 28 13:42:00.119: INFO: Restart count of pod container-probe-1665/liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is now 2 (40.28766548s elapsed)
Jan 28 13:42:22.238: INFO: Restart count of pod container-probe-1665/liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is now 3 (1m2.406363354s elapsed)
Jan 28 13:42:42.494: INFO: Restart count of pod container-probe-1665/liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is now 4 (1m22.66264314s elapsed)
Jan 28 13:43:45.200: INFO: Restart count of pod container-probe-1665/liveness-750a36d0-8a2d-4486-b240-18a7496bf4f4 is now 5 (2m25.368323557s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:43:45.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1665" for this suite.
Jan 28 13:43:51.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:43:51.433: INFO: namespace container-probe-1665 deletion completed in 6.163995812s

• [SLOW TEST:161.740 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
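For context, the pod this conformance test creates uses a liveness probe that starts failing after startup, so the kubelet repeatedly restarts the container and the test asserts that restartCount only ever increases. The log does not show the actual manifest; a minimal sketch of that pattern, with all names, the image, and probe timings illustrative, would look like:

```yaml
# Hypothetical sketch: a container whose liveness probe fails after ~10s,
# so the kubelet restarts it and the pod's restartCount climbs monotonically.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example        # illustrative name, not from the log
spec:
  containers:
  - name: liveness
    image: busybox              # assumed image; the test's real image is not shown
    args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5
```

After a few probe failures the kubelet kills and restarts the container, matching the "Restart count ... is now N" lines above.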
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:43:51.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 13:44:00.664: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:44:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8096" for this suite.
Jan 28 13:44:06.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:44:06.875: INFO: namespace container-runtime-8096 deletion completed in 6.180203546s

• [SLOW TEST:15.441 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
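The test above verifies that with `terminationMessagePolicy: FallbackToLogsOnError`, a container that *succeeds* reports an empty termination message, since logs are only used as a fallback on error. A minimal sketch of that container spec (names and image are illustrative assumptions):

```yaml
# Hypothetical sketch: the pod exits 0, so FallbackToLogsOnError does NOT
# copy logs into the termination message and it stays empty ("&{}" above).
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                    # assumed image
    command: ["/bin/sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
```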
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:44:06.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6717be8d-ebd9-48bf-806c-736fdaa2f4a5
STEP: Creating a pod to test consume configMaps
Jan 28 13:44:06.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f" in namespace "projected-3871" to be "success or failure"
Jan 28 13:44:06.994: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608872ms
Jan 28 13:44:09.006: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020754616s
Jan 28 13:44:11.037: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05147707s
Jan 28 13:44:13.046: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060281886s
Jan 28 13:44:15.063: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077904441s
STEP: Saw pod success
Jan 28 13:44:15.064: INFO: Pod "pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f" satisfied condition "success or failure"
Jan 28 13:44:15.075: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 13:44:15.140: INFO: Waiting for pod pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f to disappear
Jan 28 13:44:15.155: INFO: Pod pod-projected-configmaps-834aa2d4-ed22-4139-af61-b166f749077f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:44:15.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3871" for this suite.
Jan 28 13:44:21.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:44:21.420: INFO: namespace projected-3871 deletion completed in 6.258403991s

• [SLOW TEST:14.544 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
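The test above mounts a projected configMap volume with `defaultMode` set and checks the file permissions from inside the pod. The manifest itself is not printed in the log; a rough sketch of the shape being exercised, with all names illustrative, is:

```yaml
# Hypothetical sketch: a projected volume applying defaultMode to the
# files rendered from a configMap source.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox                    # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      defaultMode: 0400               # permission bits applied to projected files
      sources:
      - configMap:
          name: example-config        # illustrative configMap name
```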
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:44:21.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 13:44:41.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 13:44:41.687: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 13:44:43.688: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 13:44:43.701: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 13:44:45.688: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 13:44:45.700: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 13:44:47.688: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 13:44:47.696: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:44:47.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3841" for this suite.
Jan 28 13:45:09.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:45:09.985: INFO: namespace container-lifecycle-hook-3841 deletion completed in 22.257598972s

• [SLOW TEST:48.562 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
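The lifecycle-hook test above creates a pod with a preStop HTTP hook, deletes it, and then confirms the handler pod received the hook request. A minimal sketch of a preStop httpGet hook (handler path, port, and image are illustrative, not taken from the log):

```yaml
# Hypothetical sketch: before sending SIGTERM on pod deletion, the kubelet
# issues this HTTP GET, which is what the test's handler pod observes.
apiVersion: v1
kind: Pod
metadata:
  name: prestop-http-example    # illustrative name
spec:
  containers:
  - name: main
    image: nginx                # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /prestop        # illustrative handler path
          port: 8080            # illustrative port
```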
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:45:09.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-2275954a-6551-4700-8b4c-9e8454233000
STEP: Creating secret with name s-test-opt-upd-0513af3f-7c17-4783-8b75-de80919d6062
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2275954a-6551-4700-8b4c-9e8454233000
STEP: Updating secret s-test-opt-upd-0513af3f-7c17-4783-8b75-de80919d6062
STEP: Creating secret with name s-test-opt-create-0226942a-82df-4825-a067-a82ec33322fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:45:28.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3923" for this suite.
Jan 28 13:45:52.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:45:52.805: INFO: namespace secrets-3923 deletion completed in 24.151506109s

• [SLOW TEST:42.821 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
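The secrets test above mounts *optional* secrets, then deletes one, updates another, and creates a third, waiting for each change to show up in the volume. The key manifest detail is `optional: true` on the secret volume source; a sketch with illustrative names:

```yaml
# Hypothetical sketch: an optional secret volume. The pod starts even if
# the secret is missing, and later create/update/delete of the secret is
# eventually reflected in the mounted files.
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-example   # illustrative name
spec:
  containers:
  - name: main
    image: busybox                # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: sec
      mountPath: /etc/optional-secret
  volumes:
  - name: sec
    secret:
      secretName: maybe-present   # illustrative secret name
      optional: true              # do not block pod startup if the secret is absent
```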
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:45:52.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 28 13:45:52.931: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 13:45:52.945: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 13:45:52.949: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Jan 28 13:45:52.978: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:52.978: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 13:45:52.978: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 28 13:45:52.978: INFO: 	Container weave ready: true, restart count 0
Jan 28 13:45:52.978: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 13:45:52.978: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 28 13:45:53.029: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 28 13:45:53.029: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 13:45:53.029: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 28 13:45:53.029: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 28 13:45:53.029: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container coredns ready: true, restart count 0
Jan 28 13:45:53.029: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container etcd ready: true, restart count 0
Jan 28 13:45:53.029: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container weave ready: true, restart count 0
Jan 28 13:45:53.029: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 13:45:53.029: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 13:45:53.029: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f845c9f5-29a6-4406-bd0d-22305f8c84a7 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f845c9f5-29a6-4406-bd0d-22305f8c84a7 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f845c9f5-29a6-4406-bd0d-22305f8c84a7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:46:13.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2962" for this suite.
Jan 28 13:46:33.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:46:33.533: INFO: namespace sched-pred-2962 deletion completed in 20.226486372s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:40.725 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
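The scheduler-predicates test above applies a random label to a node (`kubernetes.io/e2e-... 42`) and then relaunches a pod whose nodeSelector matches it, verifying the pod lands on that node. The matching side of that pattern can be sketched as (label key/value and names illustrative):

```yaml
# Hypothetical sketch: a pod that only schedules onto a node carrying the
# label applied in the step above.
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-example          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"   # must match a label already on the target node
  containers:
  - name: main
    image: busybox                    # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]
```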
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:46:33.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-3d038c8e-65d7-48b1-b025-c2a708e358e9
STEP: Creating configMap with name cm-test-opt-upd-8b9b280c-879e-4f78-9770-646a8a996e88
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3d038c8e-65d7-48b1-b025-c2a708e358e9
STEP: Updating configmap cm-test-opt-upd-8b9b280c-879e-4f78-9770-646a8a996e88
STEP: Creating configMap with name cm-test-opt-create-24a59ade-fae9-41c6-b421-413a8499ed31
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:46:48.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7211" for this suite.
Jan 28 13:47:10.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:47:10.262: INFO: namespace projected-7211 deletion completed in 22.194344987s

• [SLOW TEST:36.729 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:47:10.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9757
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9757
STEP: Deleting pre-stop pod
Jan 28 13:47:33.493: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:47:33.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9757" for this suite.
Jan 28 13:48:11.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:48:11.751: INFO: namespace prestop-9757 deletion completed in 38.224965022s

• [SLOW TEST:61.489 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
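The tester payload printed in the PreStop run above (`"Received": {"prestop": 1}`) is the state the test inspects to confirm the hook fired. A minimal illustrative sketch of that check in Python — the JSON fields are copied from the log, but `prestop_hook_fired` is a hypothetical helper, not the e2e suite's actual code:

```python
import json

# State reported by the tester pod, copied from the log output above.
payload = json.loads("""
{
    "Hostname": "server",
    "Sent": null,
    "Received": {"prestop": 1},
    "Errors": null,
    "StillContactingPeers": true
}
""")

def prestop_hook_fired(state: dict) -> bool:
    """Return True if the server saw at least one preStop callback and
    reported no errors -- the condition the test effectively waits for."""
    received = state.get("Received") or {}
    return received.get("prestop", 0) >= 1 and not state.get("Errors")

print(prestop_hook_fired(payload))  # prints: True
```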
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:48:11.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 28 13:48:11.883: INFO: Waiting up to 5m0s for pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b" in namespace "emptydir-2599" to be "success or failure"
Jan 28 13:48:11.891: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.770395ms
Jan 28 13:48:13.908: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025283688s
Jan 28 13:48:15.967: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08391311s
Jan 28 13:48:17.978: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094680308s
Jan 28 13:48:19.992: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109129541s
STEP: Saw pod success
Jan 28 13:48:19.992: INFO: Pod "pod-01dff14e-da47-49a8-8ba5-a6fead42c60b" satisfied condition "success or failure"
Jan 28 13:48:19.999: INFO: Trying to get logs from node iruya-node pod pod-01dff14e-da47-49a8-8ba5-a6fead42c60b container test-container: 
STEP: delete the pod
Jan 28 13:48:20.163: INFO: Waiting for pod pod-01dff14e-da47-49a8-8ba5-a6fead42c60b to disappear
Jan 28 13:48:20.198: INFO: Pod pod-01dff14e-da47-49a8-8ba5-a6fead42c60b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:48:20.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2599" for this suite.
Jan 28 13:48:26.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:48:26.453: INFO: namespace emptydir-2599 deletion completed in 6.229798784s

• [SLOW TEST:14.701 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
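The EmptyDir test above polls the pod phase every couple of seconds until it reaches the "success or failure" condition (four `Pending` samples, then `Succeeded`). A hedged sketch of that polling pattern in Python — `get_phase` stands in for a real API call, and the injectable `clock`/`sleep` parameters are illustrative, not part of the framework:

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase,
    mirroring the Pending -> Succeeded progression in the log."""
    deadline = clock() + timeout
    while clock() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach 'success or failure' in time")

# Simulated phases matching the log: four Pending polls, then Succeeded.
phases = iter(["Pending", "Pending", "Pending", "Pending", "Succeeded"])
result = wait_for_pod_completion(lambda: next(phases), sleep=lambda _: None)
print(result)  # prints: Succeeded
```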
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:48:26.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5850
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 13:48:26.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 13:49:02.809: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5850 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 13:49:02.809: INFO: >>> kubeConfig: /root/.kube/config
I0128 13:49:02.921453       9 log.go:172] (0xc0019b9c30) (0xc0028883c0) Create stream
I0128 13:49:02.921638       9 log.go:172] (0xc0019b9c30) (0xc0028883c0) Stream added, broadcasting: 1
I0128 13:49:02.932859       9 log.go:172] (0xc0019b9c30) Reply frame received for 1
I0128 13:49:02.932989       9 log.go:172] (0xc0019b9c30) (0xc0009fa8c0) Create stream
I0128 13:49:02.933004       9 log.go:172] (0xc0019b9c30) (0xc0009fa8c0) Stream added, broadcasting: 3
I0128 13:49:02.935270       9 log.go:172] (0xc0019b9c30) Reply frame received for 3
I0128 13:49:02.935307       9 log.go:172] (0xc0019b9c30) (0xc002888460) Create stream
I0128 13:49:02.935329       9 log.go:172] (0xc0019b9c30) (0xc002888460) Stream added, broadcasting: 5
I0128 13:49:02.937065       9 log.go:172] (0xc0019b9c30) Reply frame received for 5
I0128 13:49:04.097200       9 log.go:172] (0xc0019b9c30) Data frame received for 3
I0128 13:49:04.097352       9 log.go:172] (0xc0009fa8c0) (3) Data frame handling
I0128 13:49:04.097407       9 log.go:172] (0xc0009fa8c0) (3) Data frame sent
I0128 13:49:04.234850       9 log.go:172] (0xc0019b9c30) (0xc002888460) Stream removed, broadcasting: 5
I0128 13:49:04.234989       9 log.go:172] (0xc0019b9c30) Data frame received for 1
I0128 13:49:04.235011       9 log.go:172] (0xc0028883c0) (1) Data frame handling
I0128 13:49:04.235039       9 log.go:172] (0xc0028883c0) (1) Data frame sent
I0128 13:49:04.235062       9 log.go:172] (0xc0019b9c30) (0xc0009fa8c0) Stream removed, broadcasting: 3
I0128 13:49:04.235124       9 log.go:172] (0xc0019b9c30) (0xc0028883c0) Stream removed, broadcasting: 1
I0128 13:49:04.235141       9 log.go:172] (0xc0019b9c30) Go away received
I0128 13:49:04.235358       9 log.go:172] (0xc0019b9c30) (0xc0028883c0) Stream removed, broadcasting: 1
I0128 13:49:04.235367       9 log.go:172] (0xc0019b9c30) (0xc0009fa8c0) Stream removed, broadcasting: 3
I0128 13:49:04.235373       9 log.go:172] (0xc0019b9c30) (0xc002888460) Stream removed, broadcasting: 5
Jan 28 13:49:04.235: INFO: Found all expected endpoints: [netserver-0]
Jan 28 13:49:04.245: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5850 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 13:49:04.245: INFO: >>> kubeConfig: /root/.kube/config
I0128 13:49:04.302198       9 log.go:172] (0xc001ec0b00) (0xc002888a00) Create stream
I0128 13:49:04.302368       9 log.go:172] (0xc001ec0b00) (0xc002888a00) Stream added, broadcasting: 1
I0128 13:49:04.309771       9 log.go:172] (0xc001ec0b00) Reply frame received for 1
I0128 13:49:04.309948       9 log.go:172] (0xc001ec0b00) (0xc0009fab40) Create stream
I0128 13:49:04.309963       9 log.go:172] (0xc001ec0b00) (0xc0009fab40) Stream added, broadcasting: 3
I0128 13:49:04.313097       9 log.go:172] (0xc001ec0b00) Reply frame received for 3
I0128 13:49:04.313147       9 log.go:172] (0xc001ec0b00) (0xc001af8b40) Create stream
I0128 13:49:04.313166       9 log.go:172] (0xc001ec0b00) (0xc001af8b40) Stream added, broadcasting: 5
I0128 13:49:04.315168       9 log.go:172] (0xc001ec0b00) Reply frame received for 5
I0128 13:49:05.425317       9 log.go:172] (0xc001ec0b00) Data frame received for 3
I0128 13:49:05.425605       9 log.go:172] (0xc0009fab40) (3) Data frame handling
I0128 13:49:05.425740       9 log.go:172] (0xc0009fab40) (3) Data frame sent
I0128 13:49:05.613354       9 log.go:172] (0xc001ec0b00) Data frame received for 1
I0128 13:49:05.613521       9 log.go:172] (0xc001ec0b00) (0xc0009fab40) Stream removed, broadcasting: 3
I0128 13:49:05.613745       9 log.go:172] (0xc002888a00) (1) Data frame handling
I0128 13:49:05.613928       9 log.go:172] (0xc002888a00) (1) Data frame sent
I0128 13:49:05.613978       9 log.go:172] (0xc001ec0b00) (0xc001af8b40) Stream removed, broadcasting: 5
I0128 13:49:05.614081       9 log.go:172] (0xc001ec0b00) (0xc002888a00) Stream removed, broadcasting: 1
I0128 13:49:05.614108       9 log.go:172] (0xc001ec0b00) Go away received
I0128 13:49:05.614606       9 log.go:172] (0xc001ec0b00) (0xc002888a00) Stream removed, broadcasting: 1
I0128 13:49:05.614697       9 log.go:172] (0xc001ec0b00) (0xc0009fab40) Stream removed, broadcasting: 3
I0128 13:49:05.614721       9 log.go:172] (0xc001ec0b00) (0xc001af8b40) Stream removed, broadcasting: 5
Jan 28 13:49:05.614: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:49:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5850" for this suite.
Jan 28 13:49:29.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:49:29.828: INFO: namespace pod-network-test-5850 deletion completed in 24.200339573s

• [SLOW TEST:63.373 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
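The UDP connectivity probe above pipes `hostName` through `nc -w 1 -u <pod-ip> 8081` into each netserver pod and expects the pod's hostname back (`Found all expected endpoints: [netserver-0]`). A self-contained sketch of that request/response exchange in Python, using a local socket pair instead of a real netserver pod — the reply protocol here is an assumption inferred from the log, not netserver's verified implementation:

```python
import socket
import threading

def serve_once(sock, hostname):
    """Reply to one UDP datagram with the 'pod hostname', like the probe expects."""
    data, addr = sock.recvfrom(1024)
    if data.strip() == b"hostName":
        sock.sendto(hostname.encode(), addr)

def udp_probe(host, port, timeout=1.0):
    """Send 'hostName' and return the reply -- the check behind
    `echo hostName | nc -w 1 -u <pod-ip> 8081` in the log."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as client:
        client.settimeout(timeout)
        client.sendto(b"hostName\n", (host, port))
        reply, _ = client.recvfrom(1024)
        return reply.decode()

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # ephemeral port; no real cluster needed
port = server.getsockname()[1]
worker = threading.Thread(target=serve_once, args=(server, "netserver-0"))
worker.start()
reply = udp_probe("127.0.0.1", port)
print(reply)  # prints: netserver-0
worker.join()
server.close()
```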
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:49:29.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-730
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-730
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-730
Jan 28 13:49:30.018: INFO: Found 0 stateful pods, waiting for 1
Jan 28 13:49:40.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 28 13:49:40.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:49:42.531: INFO: stderr: "I0128 13:49:42.071726    1304 log.go:172] (0xc000b38420) (0xc000b2e6e0) Create stream\nI0128 13:49:42.071917    1304 log.go:172] (0xc000b38420) (0xc000b2e6e0) Stream added, broadcasting: 1\nI0128 13:49:42.083094    1304 log.go:172] (0xc000b38420) Reply frame received for 1\nI0128 13:49:42.083172    1304 log.go:172] (0xc000b38420) (0xc000215ae0) Create stream\nI0128 13:49:42.083192    1304 log.go:172] (0xc000b38420) (0xc000215ae0) Stream added, broadcasting: 3\nI0128 13:49:42.085169    1304 log.go:172] (0xc000b38420) Reply frame received for 3\nI0128 13:49:42.085203    1304 log.go:172] (0xc000b38420) (0xc0005e4280) Create stream\nI0128 13:49:42.085215    1304 log.go:172] (0xc000b38420) (0xc0005e4280) Stream added, broadcasting: 5\nI0128 13:49:42.086921    1304 log.go:172] (0xc000b38420) Reply frame received for 5\nI0128 13:49:42.273706    1304 log.go:172] (0xc000b38420) Data frame received for 5\nI0128 13:49:42.273811    1304 log.go:172] (0xc0005e4280) (5) Data frame handling\nI0128 13:49:42.273854    1304 log.go:172] (0xc0005e4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 13:49:42.351978    1304 log.go:172] (0xc000b38420) Data frame received for 3\nI0128 13:49:42.352078    1304 log.go:172] (0xc000215ae0) (3) Data frame handling\nI0128 13:49:42.352100    1304 log.go:172] (0xc000215ae0) (3) Data frame sent\nI0128 13:49:42.513322    1304 log.go:172] (0xc000b38420) Data frame received for 1\nI0128 13:49:42.513512    1304 log.go:172] (0xc000b38420) (0xc000215ae0) Stream removed, broadcasting: 3\nI0128 13:49:42.513606    1304 log.go:172] (0xc000b2e6e0) (1) Data frame handling\nI0128 13:49:42.513631    1304 log.go:172] (0xc000b2e6e0) (1) Data frame sent\nI0128 13:49:42.513672    1304 log.go:172] (0xc000b38420) (0xc0005e4280) Stream removed, broadcasting: 5\nI0128 13:49:42.513744    1304 log.go:172] (0xc000b38420) (0xc000b2e6e0) Stream removed, broadcasting: 1\nI0128 13:49:42.514901    1304 log.go:172] 
(0xc000b38420) (0xc000b2e6e0) Stream removed, broadcasting: 1\nI0128 13:49:42.514930    1304 log.go:172] (0xc000b38420) (0xc000215ae0) Stream removed, broadcasting: 3\nI0128 13:49:42.514942    1304 log.go:172] (0xc000b38420) (0xc0005e4280) Stream removed, broadcasting: 5\n"
Jan 28 13:49:42.531: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:49:42.531: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 13:49:42.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 28 13:49:52.570: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 13:49:52.571: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 13:49:52.608: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 13:49:52.608: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:49:52.609: INFO: 
Jan 28 13:49:52.609: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 28 13:49:54.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986856643s
Jan 28 13:49:55.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.493663696s
Jan 28 13:49:56.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.986752197s
Jan 28 13:49:57.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954083226s
Jan 28 13:49:59.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.89574132s
Jan 28 13:50:00.526: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.165872589s
Jan 28 13:50:01.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.069711367s
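The countdown above ("doesn't scale past 3 for another Ns") is a negative check: the replica count is sampled repeatedly over a window, and the test fails if it ever exceeds the target. A hedged sketch of that pattern in Python — `get_replicas` and the injectable `clock`/`sleep` are illustrative stand-ins, not the e2e suite's API:

```python
import time

def verify_no_scale_past(get_replicas, limit, window=10.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Keep sampling the replica count for `window` seconds and fail if it
    ever exceeds `limit` -- the check the log prints as
    "doesn't scale past 3 for another Ns"."""
    deadline = clock() + window
    while clock() < deadline:
        count = get_replicas()
        if count > limit:
            raise AssertionError(f"scaled past {limit}: saw {count}")
        sleep(1.0)
    return True

# Deterministic run with a fake clock: replicas hold at the limit of 3.
ticks = iter(range(100))
ok = verify_no_scale_past(lambda: 3, 3, window=10.0,
                          clock=lambda: next(ticks), sleep=lambda _: None)
print(ok)  # prints: True
```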
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-730
Jan 28 13:50:02.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 13:50:03.619: INFO: stderr: "I0128 13:50:03.180495    1332 log.go:172] (0xc000146790) (0xc00064c320) Create stream\nI0128 13:50:03.181075    1332 log.go:172] (0xc000146790) (0xc00064c320) Stream added, broadcasting: 1\nI0128 13:50:03.191526    1332 log.go:172] (0xc000146790) Reply frame received for 1\nI0128 13:50:03.191557    1332 log.go:172] (0xc000146790) (0xc00064c3c0) Create stream\nI0128 13:50:03.191562    1332 log.go:172] (0xc000146790) (0xc00064c3c0) Stream added, broadcasting: 3\nI0128 13:50:03.192855    1332 log.go:172] (0xc000146790) Reply frame received for 3\nI0128 13:50:03.192894    1332 log.go:172] (0xc000146790) (0xc0006d6000) Create stream\nI0128 13:50:03.192918    1332 log.go:172] (0xc000146790) (0xc0006d6000) Stream added, broadcasting: 5\nI0128 13:50:03.195158    1332 log.go:172] (0xc000146790) Reply frame received for 5\nI0128 13:50:03.405133    1332 log.go:172] (0xc000146790) Data frame received for 5\nI0128 13:50:03.405455    1332 log.go:172] (0xc0006d6000) (5) Data frame handling\nI0128 13:50:03.405488    1332 log.go:172] (0xc0006d6000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 13:50:03.405660    1332 log.go:172] (0xc000146790) Data frame received for 3\nI0128 13:50:03.405754    1332 log.go:172] (0xc00064c3c0) (3) Data frame handling\nI0128 13:50:03.405788    1332 log.go:172] (0xc00064c3c0) (3) Data frame sent\nI0128 13:50:03.610625    1332 log.go:172] (0xc000146790) (0xc0006d6000) Stream removed, broadcasting: 5\nI0128 13:50:03.610836    1332 log.go:172] (0xc000146790) Data frame received for 1\nI0128 13:50:03.610861    1332 log.go:172] (0xc00064c320) (1) Data frame handling\nI0128 13:50:03.610887    1332 log.go:172] (0xc00064c320) (1) Data frame sent\nI0128 13:50:03.610960    1332 log.go:172] (0xc000146790) (0xc00064c3c0) Stream removed, broadcasting: 3\nI0128 13:50:03.611125    1332 log.go:172] (0xc000146790) (0xc00064c320) Stream removed, broadcasting: 1\nI0128 13:50:03.611157    1332 log.go:172] 
(0xc000146790) Go away received\nI0128 13:50:03.612042    1332 log.go:172] (0xc000146790) (0xc00064c320) Stream removed, broadcasting: 1\nI0128 13:50:03.612060    1332 log.go:172] (0xc000146790) (0xc00064c3c0) Stream removed, broadcasting: 3\nI0128 13:50:03.612071    1332 log.go:172] (0xc000146790) (0xc0006d6000) Stream removed, broadcasting: 5\n"
Jan 28 13:50:03.620: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 13:50:03.620: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 13:50:03.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 13:50:04.295: INFO: stderr: "I0128 13:50:03.836167    1344 log.go:172] (0xc00094a210) (0xc0007665a0) Create stream\nI0128 13:50:03.836475    1344 log.go:172] (0xc00094a210) (0xc0007665a0) Stream added, broadcasting: 1\nI0128 13:50:03.840872    1344 log.go:172] (0xc00094a210) Reply frame received for 1\nI0128 13:50:03.840923    1344 log.go:172] (0xc00094a210) (0xc0001dc460) Create stream\nI0128 13:50:03.840932    1344 log.go:172] (0xc00094a210) (0xc0001dc460) Stream added, broadcasting: 3\nI0128 13:50:03.841918    1344 log.go:172] (0xc00094a210) Reply frame received for 3\nI0128 13:50:03.841942    1344 log.go:172] (0xc00094a210) (0xc000766640) Create stream\nI0128 13:50:03.841954    1344 log.go:172] (0xc00094a210) (0xc000766640) Stream added, broadcasting: 5\nI0128 13:50:03.842951    1344 log.go:172] (0xc00094a210) Reply frame received for 5\nI0128 13:50:04.037713    1344 log.go:172] (0xc00094a210) Data frame received for 5\nI0128 13:50:04.037875    1344 log.go:172] (0xc000766640) (5) Data frame handling\nI0128 13:50:04.037893    1344 log.go:172] (0xc000766640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 13:50:04.123711    1344 log.go:172] (0xc00094a210) Data frame received for 5\nI0128 13:50:04.123931    1344 log.go:172] (0xc000766640) (5) Data frame handling\nI0128 13:50:04.123964    1344 log.go:172] (0xc000766640) (5) Data frame sent\nI0128 13:50:04.123984    1344 log.go:172] (0xc00094a210) Data frame received for 5\nI0128 13:50:04.123992    1344 log.go:172] (0xc000766640) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0128 13:50:04.124035    1344 log.go:172] (0xc00094a210) Data frame received for 3\nI0128 13:50:04.124083    1344 log.go:172] (0xc0001dc460) (3) Data frame handling\nI0128 13:50:04.124098    1344 log.go:172] (0xc0001dc460) (3) Data frame sent\nI0128 13:50:04.124135    1344 log.go:172] (0xc000766640) (5) Data frame sent\nI0128 13:50:04.284889    1344 log.go:172] 
(0xc00094a210) Data frame received for 1\nI0128 13:50:04.285004    1344 log.go:172] (0xc00094a210) (0xc0001dc460) Stream removed, broadcasting: 3\nI0128 13:50:04.285057    1344 log.go:172] (0xc0007665a0) (1) Data frame handling\nI0128 13:50:04.285082    1344 log.go:172] (0xc0007665a0) (1) Data frame sent\nI0128 13:50:04.285221    1344 log.go:172] (0xc00094a210) (0xc000766640) Stream removed, broadcasting: 5\nI0128 13:50:04.285349    1344 log.go:172] (0xc00094a210) (0xc0007665a0) Stream removed, broadcasting: 1\nI0128 13:50:04.285381    1344 log.go:172] (0xc00094a210) Go away received\nI0128 13:50:04.286090    1344 log.go:172] (0xc00094a210) (0xc0007665a0) Stream removed, broadcasting: 1\nI0128 13:50:04.286102    1344 log.go:172] (0xc00094a210) (0xc0001dc460) Stream removed, broadcasting: 3\nI0128 13:50:04.286107    1344 log.go:172] (0xc00094a210) (0xc000766640) Stream removed, broadcasting: 5\n"
Jan 28 13:50:04.295: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 13:50:04.295: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 13:50:04.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 13:50:04.804: INFO: stderr: "I0128 13:50:04.495912    1360 log.go:172] (0xc000116dc0) (0xc000664640) Create stream\nI0128 13:50:04.496232    1360 log.go:172] (0xc000116dc0) (0xc000664640) Stream added, broadcasting: 1\nI0128 13:50:04.503775    1360 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0128 13:50:04.503829    1360 log.go:172] (0xc000116dc0) (0xc00062e280) Create stream\nI0128 13:50:04.503855    1360 log.go:172] (0xc000116dc0) (0xc00062e280) Stream added, broadcasting: 3\nI0128 13:50:04.505198    1360 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0128 13:50:04.505233    1360 log.go:172] (0xc000116dc0) (0xc0008ca000) Create stream\nI0128 13:50:04.505261    1360 log.go:172] (0xc000116dc0) (0xc0008ca000) Stream added, broadcasting: 5\nI0128 13:50:04.506870    1360 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0128 13:50:04.696041    1360 log.go:172] (0xc000116dc0) Data frame received for 5\nI0128 13:50:04.696192    1360 log.go:172] (0xc0008ca000) (5) Data frame handling\nI0128 13:50:04.696212    1360 log.go:172] (0xc0008ca000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0128 13:50:04.696379    1360 log.go:172] (0xc000116dc0) Data frame received for 3\nI0128 13:50:04.696574    1360 log.go:172] (0xc00062e280) (3) Data frame handling\nI0128 13:50:04.696783    1360 log.go:172] (0xc00062e280) (3) Data frame sent\nI0128 13:50:04.796187    1360 log.go:172] (0xc000116dc0) (0xc0008ca000) Stream removed, broadcasting: 5\nI0128 13:50:04.796299    1360 log.go:172] (0xc000116dc0) Data frame received for 1\nI0128 13:50:04.796395    1360 log.go:172] (0xc000116dc0) (0xc00062e280) Stream removed, broadcasting: 3\nI0128 13:50:04.796427    1360 log.go:172] (0xc000664640) (1) Data frame handling\nI0128 13:50:04.796456    1360 log.go:172] (0xc000664640) (1) Data frame sent\nI0128 13:50:04.796473    1360 log.go:172] (0xc000116dc0) (0xc000664640) 
Stream removed, broadcasting: 1\nI0128 13:50:04.796490    1360 log.go:172] (0xc000116dc0) Go away received\nI0128 13:50:04.797111    1360 log.go:172] (0xc000116dc0) (0xc000664640) Stream removed, broadcasting: 1\nI0128 13:50:04.797120    1360 log.go:172] (0xc000116dc0) (0xc00062e280) Stream removed, broadcasting: 3\nI0128 13:50:04.797125    1360 log.go:172] (0xc000116dc0) (0xc0008ca000) Stream removed, broadcasting: 5\n"
Jan 28 13:50:04.804: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 13:50:04.804: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 13:50:04.813: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:50:04.813: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 13:50:04.813: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 28 13:50:04.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:50:05.222: INFO: stderr: "I0128 13:50:04.998947    1380 log.go:172] (0xc000926370) (0xc000864640) Create stream\nI0128 13:50:04.999263    1380 log.go:172] (0xc000926370) (0xc000864640) Stream added, broadcasting: 1\nI0128 13:50:05.008842    1380 log.go:172] (0xc000926370) Reply frame received for 1\nI0128 13:50:05.008946    1380 log.go:172] (0xc000926370) (0xc000a10000) Create stream\nI0128 13:50:05.008959    1380 log.go:172] (0xc000926370) (0xc000a10000) Stream added, broadcasting: 3\nI0128 13:50:05.011667    1380 log.go:172] (0xc000926370) Reply frame received for 3\nI0128 13:50:05.011714    1380 log.go:172] (0xc000926370) (0xc0008646e0) Create stream\nI0128 13:50:05.011726    1380 log.go:172] (0xc000926370) (0xc0008646e0) Stream added, broadcasting: 5\nI0128 13:50:05.013107    1380 log.go:172] (0xc000926370) Reply frame received for 5\nI0128 13:50:05.102028    1380 log.go:172] (0xc000926370) Data frame received for 5\nI0128 13:50:05.102172    1380 log.go:172] (0xc0008646e0) (5) Data frame handling\nI0128 13:50:05.102194    1380 log.go:172] (0xc0008646e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 13:50:05.102218    1380 log.go:172] (0xc000926370) Data frame received for 3\nI0128 13:50:05.102222    1380 log.go:172] (0xc000a10000) (3) Data frame handling\nI0128 13:50:05.102227    1380 log.go:172] (0xc000a10000) (3) Data frame sent\nI0128 13:50:05.211692    1380 log.go:172] (0xc000926370) Data frame received for 1\nI0128 13:50:05.211869    1380 log.go:172] (0xc000864640) (1) Data frame handling\nI0128 13:50:05.211915    1380 log.go:172] (0xc000864640) (1) Data frame sent\nI0128 13:50:05.211946    1380 log.go:172] (0xc000926370) (0xc000864640) Stream removed, broadcasting: 1\nI0128 13:50:05.213527    1380 log.go:172] (0xc000926370) (0xc000a10000) Stream removed, broadcasting: 3\nI0128 13:50:05.213974    1380 log.go:172] (0xc000926370) (0xc0008646e0) Stream removed, broadcasting: 5\nI0128 13:50:05.214264    1380 log.go:172] 
(0xc000926370) Go away received\nI0128 13:50:05.214317    1380 log.go:172] (0xc000926370) (0xc000864640) Stream removed, broadcasting: 1\nI0128 13:50:05.214350    1380 log.go:172] (0xc000926370) (0xc000a10000) Stream removed, broadcasting: 3\nI0128 13:50:05.214421    1380 log.go:172] (0xc000926370) (0xc0008646e0) Stream removed, broadcasting: 5\n"
Jan 28 13:50:05.222: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:50:05.222: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 13:50:05.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:50:05.550: INFO: stderr: "I0128 13:50:05.371547    1400 log.go:172] (0xc000a002c0) (0xc0009c4640) Create stream\nI0128 13:50:05.371755    1400 log.go:172] (0xc000a002c0) (0xc0009c4640) Stream added, broadcasting: 1\nI0128 13:50:05.374385    1400 log.go:172] (0xc000a002c0) Reply frame received for 1\nI0128 13:50:05.374454    1400 log.go:172] (0xc000a002c0) (0xc0009c46e0) Create stream\nI0128 13:50:05.374471    1400 log.go:172] (0xc000a002c0) (0xc0009c46e0) Stream added, broadcasting: 3\nI0128 13:50:05.375494    1400 log.go:172] (0xc000a002c0) Reply frame received for 3\nI0128 13:50:05.375519    1400 log.go:172] (0xc000a002c0) (0xc0009c4780) Create stream\nI0128 13:50:05.375527    1400 log.go:172] (0xc000a002c0) (0xc0009c4780) Stream added, broadcasting: 5\nI0128 13:50:05.377098    1400 log.go:172] (0xc000a002c0) Reply frame received for 5\nI0128 13:50:05.458223    1400 log.go:172] (0xc000a002c0) Data frame received for 5\nI0128 13:50:05.458299    1400 log.go:172] (0xc0009c4780) (5) Data frame handling\nI0128 13:50:05.458324    1400 log.go:172] (0xc0009c4780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 13:50:05.474782    1400 log.go:172] (0xc000a002c0) Data frame received for 3\nI0128 13:50:05.474818    1400 log.go:172] (0xc0009c46e0) (3) Data frame handling\nI0128 13:50:05.474835    1400 log.go:172] (0xc0009c46e0) (3) Data frame sent\nI0128 13:50:05.541799    1400 log.go:172] (0xc000a002c0) Data frame received for 1\nI0128 13:50:05.541863    1400 log.go:172] (0xc000a002c0) (0xc0009c46e0) Stream removed, broadcasting: 3\nI0128 13:50:05.541898    1400 log.go:172] (0xc0009c4640) (1) Data frame handling\nI0128 13:50:05.541931    1400 log.go:172] (0xc000a002c0) (0xc0009c4780) Stream removed, broadcasting: 5\nI0128 13:50:05.541968    1400 log.go:172] (0xc0009c4640) (1) Data frame sent\nI0128 13:50:05.541985    1400 log.go:172] (0xc000a002c0) (0xc0009c4640) Stream removed, broadcasting: 1\nI0128 13:50:05.542005    1400 log.go:172] 
(0xc000a002c0) Go away received\nI0128 13:50:05.542823    1400 log.go:172] (0xc000a002c0) (0xc0009c4640) Stream removed, broadcasting: 1\nI0128 13:50:05.542844    1400 log.go:172] (0xc000a002c0) (0xc0009c46e0) Stream removed, broadcasting: 3\nI0128 13:50:05.542854    1400 log.go:172] (0xc000a002c0) (0xc0009c4780) Stream removed, broadcasting: 5\n"
Jan 28 13:50:05.551: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:50:05.551: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 13:50:05.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-730 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 13:50:05.959: INFO: stderr: "I0128 13:50:05.702896    1420 log.go:172] (0xc000912160) (0xc0009766e0) Create stream\nI0128 13:50:05.703368    1420 log.go:172] (0xc000912160) (0xc0009766e0) Stream added, broadcasting: 1\nI0128 13:50:05.715062    1420 log.go:172] (0xc000912160) Reply frame received for 1\nI0128 13:50:05.715152    1420 log.go:172] (0xc000912160) (0xc0002f43c0) Create stream\nI0128 13:50:05.715168    1420 log.go:172] (0xc000912160) (0xc0002f43c0) Stream added, broadcasting: 3\nI0128 13:50:05.716562    1420 log.go:172] (0xc000912160) Reply frame received for 3\nI0128 13:50:05.716623    1420 log.go:172] (0xc000912160) (0xc000976780) Create stream\nI0128 13:50:05.716649    1420 log.go:172] (0xc000912160) (0xc000976780) Stream added, broadcasting: 5\nI0128 13:50:05.718503    1420 log.go:172] (0xc000912160) Reply frame received for 5\nI0128 13:50:05.829162    1420 log.go:172] (0xc000912160) Data frame received for 5\nI0128 13:50:05.829211    1420 log.go:172] (0xc000976780) (5) Data frame handling\nI0128 13:50:05.829229    1420 log.go:172] (0xc000976780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 13:50:05.859295    1420 log.go:172] (0xc000912160) Data frame received for 3\nI0128 13:50:05.859387    1420 log.go:172] (0xc0002f43c0) (3) Data frame handling\nI0128 13:50:05.859406    1420 log.go:172] (0xc0002f43c0) (3) Data frame sent\nI0128 13:50:05.945965    1420 log.go:172] (0xc000912160) Data frame received for 1\nI0128 13:50:05.946159    1420 log.go:172] (0xc0009766e0) (1) Data frame handling\nI0128 13:50:05.946205    1420 log.go:172] (0xc0009766e0) (1) Data frame sent\nI0128 13:50:05.947917    1420 log.go:172] (0xc000912160) (0xc0009766e0) Stream removed, broadcasting: 1\nI0128 13:50:05.949588    1420 log.go:172] (0xc000912160) (0xc000976780) Stream removed, broadcasting: 5\nI0128 13:50:05.949660    1420 log.go:172] (0xc000912160) (0xc0002f43c0) Stream removed, broadcasting: 3\nI0128 13:50:05.949845    1420 log.go:172] 
(0xc000912160) (0xc0009766e0) Stream removed, broadcasting: 1\nI0128 13:50:05.949864    1420 log.go:172] (0xc000912160) (0xc0002f43c0) Stream removed, broadcasting: 3\nI0128 13:50:05.949881    1420 log.go:172] (0xc000912160) (0xc000976780) Stream removed, broadcasting: 5\nI0128 13:50:05.951077    1420 log.go:172] (0xc000912160) Go away received\n"
Jan 28 13:50:05.959: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 13:50:05.960: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
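Each pod's readiness is broken by moving nginx's index.html out of the web root, as the exec output above shows. A hedged, generic form of that invocation (namespace, pod name, and kubeconfig path taken from the log; the `|| true` tolerates a file that was already moved):

```shell
# Illustrative replay of the exec step logged above, not the framework's own code.
kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-730 \
  exec ss-2 -- /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
```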

Jan 28 13:50:05.960: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 13:50:05.967: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 28 13:50:15.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 13:50:15.984: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 13:50:15.984: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 13:50:16.045: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:16.045: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:16.045: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:16.045: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:16.045: INFO: 
Jan 28 13:50:16.045: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:17.655: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:17.655: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:17.656: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:17.656: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:17.656: INFO: 
Jan 28 13:50:17.656: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:18.671: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:18.671: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:18.671: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:18.671: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:18.671: INFO: 
Jan 28 13:50:18.671: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:19.683: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:19.683: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:19.683: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:19.683: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:19.683: INFO: 
Jan 28 13:50:19.684: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:20.704: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:20.704: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:20.704: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:20.705: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:20.705: INFO: 
Jan 28 13:50:20.705: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:21.716: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Jan 28 13:50:21.716: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:21.716: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:21.716: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:21.716: INFO: 
Jan 28 13:50:21.716: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 28 13:50:22.726: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 13:50:22.726: INFO: ss-0  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:22.726: INFO: ss-2  iruya-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:52 +0000 UTC  }]
Jan 28 13:50:22.726: INFO: 
Jan 28 13:50:22.726: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 28 13:50:23.738: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 13:50:23.738: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:23.738: INFO: 
Jan 28 13:50:23.738: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 28 13:50:24.757: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 13:50:24.757: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:24.758: INFO: 
Jan 28 13:50:24.758: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 28 13:50:25.768: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Jan 28 13:50:25.768: INFO: ss-0  iruya-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:50:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:49:30 +0000 UTC  }]
Jan 28 13:50:25.768: INFO: 
Jan 28 13:50:25.769: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace statefulset-730
Jan 28 13:50:26.778: INFO: Scaling statefulset ss to 0
Jan 28 13:50:26.793: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 28 13:50:26.797: INFO: Deleting all statefulset in ns statefulset-730
Jan 28 13:50:26.801: INFO: Scaling statefulset ss to 0
Jan 28 13:50:26.815: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 13:50:26.819: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:50:26.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-730" for this suite.
Jan 28 13:50:32.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:50:33.064: INFO: namespace statefulset-730 deletion completed in 6.158913394s
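The teardown sequence logged above (scale the StatefulSet to zero, wait for status.replicas to drop, then delete it) can be sketched directly with kubectl; this is an illustrative equivalent of the framework's Go helpers, not their actual implementation:

```shell
# Scale down, then poll .status.replicas until it reaches 0, mirroring the
# framework's wait loop. Namespace and StatefulSet name are taken from the log.
kubectl --namespace=statefulset-730 scale statefulset ss --replicas=0
while [ "$(kubectl --namespace=statefulset-730 get statefulset ss \
             -o jsonpath='{.status.replicas}')" != "0" ]; do
  sleep 1
done
kubectl --namespace=statefulset-730 delete statefulset ss
```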

• [SLOW TEST:63.236 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:50:33.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jan 28 13:50:33.207: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 28 13:50:33.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:33.745: INFO: stderr: ""
Jan 28 13:50:33.745: INFO: stdout: "service/redis-slave created\n"
Jan 28 13:50:33.746: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 28 13:50:33.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:34.291: INFO: stderr: ""
Jan 28 13:50:34.291: INFO: stdout: "service/redis-master created\n"
Jan 28 13:50:34.292: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 28 13:50:34.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:34.770: INFO: stderr: ""
Jan 28 13:50:34.771: INFO: stdout: "service/frontend created\n"
Jan 28 13:50:34.771: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 28 13:50:34.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:35.412: INFO: stderr: ""
Jan 28 13:50:35.412: INFO: stdout: "deployment.apps/frontend created\n"
Jan 28 13:50:35.412: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 28 13:50:35.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:35.981: INFO: stderr: ""
Jan 28 13:50:35.981: INFO: stdout: "deployment.apps/redis-master created\n"
Jan 28 13:50:35.983: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 28 13:50:35.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3662'
Jan 28 13:50:37.892: INFO: stderr: ""
Jan 28 13:50:37.893: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jan 28 13:50:37.893: INFO: Waiting for all frontend pods to be Running.
Jan 28 13:50:57.945: INFO: Waiting for frontend to serve content.
Jan 28 13:50:58.099: INFO: Trying to add a new entry to the guestbook.
Jan 28 13:50:58.136: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 28 13:50:58.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:58.425: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:58.425: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 13:50:58.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:58.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:58.749: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 13:50:58.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:58.959: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:58.960: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 13:50:58.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:59.073: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:59.073: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 13:50:59.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:59.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:59.217: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 13:50:59.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3662'
Jan 28 13:50:59.325: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 13:50:59.325: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:50:59.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3662" for this suite.
Jan 28 13:51:51.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:51:51.657: INFO: namespace kubectl-3662 deletion completed in 52.303722538s

• [SLOW TEST:78.589 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:51:51.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan 28 13:51:51.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 28 13:51:52.028: INFO: stderr: ""
Jan 28 13:51:52.028: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
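The assertion this test makes is simply that `v1` appears as an exact entry in the `kubectl api-versions` output. The same membership check can be run against captured output; the sketch below uses a truncated copy of the stdout above rather than a live cluster:

```shell
# Exact-line match: "apps/v1" and "batch/v1" must not satisfy a check for "v1".
api_versions='apps/v1
batch/v1
v1'
if printf '%s\n' "$api_versions" | grep -qx 'v1'; then
  echo "v1 present"
else
  echo "v1 missing"
fi
```

`grep -x` anchors the pattern to the whole line, which is what makes `apps/v1` a non-match.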
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:51:52.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5400" for this suite.
Jan 28 13:51:58.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:51:58.219: INFO: namespace kubectl-5400 deletion completed in 6.179032188s

• [SLOW TEST:6.561 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:51:58.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-298da2b9-a882-4a56-a8b7-ce3bce77c26b
STEP: Creating a pod to test consume configMaps
Jan 28 13:51:58.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216" in namespace "configmap-4014" to be "success or failure"
Jan 28 13:51:58.383: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Pending", Reason="", readiness=false. Elapsed: 20.208139ms
Jan 28 13:52:00.394: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03101701s
Jan 28 13:52:02.405: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042196823s
Jan 28 13:52:04.422: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058543966s
Jan 28 13:52:06.437: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074356945s
Jan 28 13:52:08.451: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088357555s
STEP: Saw pod success
Jan 28 13:52:08.451: INFO: Pod "pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216" satisfied condition "success or failure"
Jan 28 13:52:08.464: INFO: Trying to get logs from node iruya-node pod pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216 container configmap-volume-test: 
STEP: delete the pod
Jan 28 13:52:08.587: INFO: Waiting for pod pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216 to disappear
Jan 28 13:52:08.594: INFO: Pod pod-configmaps-897c5048-b4f7-4827-98af-f835369ba216 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:52:08.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4014" for this suite.
Jan 28 13:52:14.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:52:14.816: INFO: namespace configmap-4014 deletion completed in 6.215516344s

• [SLOW TEST:16.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
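For reference, the "volume with mappings" behavior exercised above corresponds to a manifest of roughly this shape. Names, data, and the image are illustrative assumptions; the suite generates UUID-suffixed names and uses its own test image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # suite uses a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name matches the log above
    image: busybox                   # assumed; the suite uses its own image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:                         # the "mappings": key -> custom file path
      - key: data-1
        path: path/to/data-1
```

The `items` list is what distinguishes this test from the plain ConfigMap-volume case: each key is projected to a caller-chosen relative path instead of a file named after the key.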
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:52:14.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-pqvl
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 13:52:15.031: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pqvl" in namespace "subpath-176" to be "success or failure"
Jan 28 13:52:15.061: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 29.652371ms
Jan 28 13:52:17.078: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046994157s
Jan 28 13:52:19.090: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058159842s
Jan 28 13:52:21.105: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073579316s
Jan 28 13:52:23.112: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080456503s
Jan 28 13:52:25.124: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 10.093034133s
Jan 28 13:52:27.132: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 12.100917905s
Jan 28 13:52:29.143: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 14.1114693s
Jan 28 13:52:31.187: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 16.155764075s
Jan 28 13:52:33.197: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 18.165630572s
Jan 28 13:52:35.206: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 20.174645682s
Jan 28 13:52:37.214: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 22.182522144s
Jan 28 13:52:39.222: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 24.191134808s
Jan 28 13:52:41.239: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 26.207626439s
Jan 28 13:52:43.249: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Running", Reason="", readiness=true. Elapsed: 28.217252775s
Jan 28 13:52:45.283: INFO: Pod "pod-subpath-test-secret-pqvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.251256899s
STEP: Saw pod success
Jan 28 13:52:45.283: INFO: Pod "pod-subpath-test-secret-pqvl" satisfied condition "success or failure"
Jan 28 13:52:45.288: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-pqvl container test-container-subpath-secret-pqvl: 
STEP: delete the pod
Jan 28 13:52:45.381: INFO: Waiting for pod pod-subpath-test-secret-pqvl to disappear
Jan 28 13:52:45.452: INFO: Pod pod-subpath-test-secret-pqvl no longer exists
STEP: Deleting pod pod-subpath-test-secret-pqvl
Jan 28 13:52:45.452: INFO: Deleting pod "pod-subpath-test-secret-pqvl" in namespace "subpath-176"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:52:45.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-176" for this suite.
Jan 28 13:52:51.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:52:51.684: INFO: namespace subpath-176 deletion completed in 6.213861933s

• [SLOW TEST:36.867 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
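The subpath case above mounts a single file out of an atomically written secret volume. A minimal sketch (secret name, key, and image are assumptions, not taken from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-secret   # name pattern matches the log above
    image: busybox                        # assumed
    command: ["cat", "/test-volume/secret-key"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/secret-key
      subPath: secret-key                 # mount one file from the volume
  volumes:
  - name: test-volume
    secret:
      secretName: my-secret               # assumed name
```

The long run of `Phase="Running"` lines in the log reflects that this test keeps the pod alive while the atomic writer updates the volume, then waits for it to exit `Succeeded`.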
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:52:51.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 28 13:52:51.760: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:53:08.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3209" for this suite.
Jan 28 13:53:20.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:53:20.288: INFO: namespace init-container-3209 deletion completed in 12.128618446s

• [SLOW TEST:28.603 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
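The init-container test above relies on the ordering guarantee that `initContainers` run to completion, one at a time, before the app container starts. A minimal sketch under assumed images:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Always        # the RestartAlways variant exercised above
  initContainers:              # run sequentially to completion first
  - name: init-1
    image: busybox             # assumed
    command: ["true"]
  - name: init-2
    image: busybox             # assumed
    command: ["true"]
  containers:
  - name: run-1
    image: k8s.gcr.io/pause:3.1   # assumed; any long-running image works
```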
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:53:20.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan 28 13:53:20.417: INFO: Waiting up to 5m0s for pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d" in namespace "var-expansion-9923" to be "success or failure"
Jan 28 13:53:20.500: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 82.647283ms
Jan 28 13:53:22.520: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102319747s
Jan 28 13:53:24.538: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120535257s
Jan 28 13:53:26.564: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146402156s
Jan 28 13:53:28.593: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175505775s
STEP: Saw pod success
Jan 28 13:53:28.593: INFO: Pod "var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d" satisfied condition "success or failure"
Jan 28 13:53:28.613: INFO: Trying to get logs from node iruya-node pod var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d container dapi-container: 
STEP: delete the pod
Jan 28 13:53:28.889: INFO: Waiting for pod var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d to disappear
Jan 28 13:53:28.904: INFO: Pod var-expansion-9c12ddcf-c2b2-44d6-92c6-c18ed1744e5d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:53:28.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9923" for this suite.
Jan 28 13:53:34.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:53:35.063: INFO: namespace var-expansion-9923 deletion completed in 6.142109602s

• [SLOW TEST:14.775 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
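The variable-expansion test above checks `$(VAR)` substitution, which the kubelet performs in `command`/`args` without needing a shell. A sketch with an assumed variable and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name matches the log above
    image: busybox             # assumed
    env:
    - name: TEST_VAR
      value: test-value
    command: ["echo"]
    args: ["$(TEST_VAR)"]      # expanded by Kubernetes, not by a shell
```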
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:53:35.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 13:53:35.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9004'
Jan 28 13:53:35.380: INFO: stderr: ""
Jan 28 13:53:35.380: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 28 13:53:45.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9004 -o json'
Jan 28 13:53:45.578: INFO: stderr: ""
Jan 28 13:53:45.578: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-28T13:53:35Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-9004\",\n        \"resourceVersion\": \"22192711\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-9004/pods/e2e-test-nginx-pod\",\n        \"uid\": \"f09b7f31-f9a4-4f5e-8170-0c402f56e493\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-9zvv6\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-9zvv6\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-9zvv6\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T13:53:35Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T13:53:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T13:53:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T13:53:35Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://e7b9f39bb66914cf65f11b6243a078444481e37092016b23c30ad55e97e253b7\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-01-28T13:53:40Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-28T13:53:35Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 28 13:53:45.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9004'
Jan 28 13:53:46.180: INFO: stderr: ""
Jan 28 13:53:46.180: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan 28 13:53:46.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9004'
Jan 28 13:53:53.132: INFO: stderr: ""
Jan 28 13:53:53.133: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:53:53.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9004" for this suite.
Jan 28 13:53:59.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:53:59.342: INFO: namespace kubectl-9004 deletion completed in 6.184316602s

• [SLOW TEST:24.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
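The replace step above pipes a modified manifest to `kubectl replace -f -`, swapping the image from `docker.io/library/nginx:1.14-alpine` to `docker.io/library/busybox:1.29` (both visible in the log). A YAML equivalent of the replacement object, in outline (the suite pipes JSON; the `command` is an assumption, since busybox needs one to stay running):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-9004
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # swapped from nginx:1.14-alpine
    command: ["sleep", "3600"]              # assumed keep-alive command
```

Note that `kubectl replace` requires a complete object, unlike `kubectl patch`; the test fetches the pod as JSON first (the large dump above) precisely so it can edit and resubmit the full spec.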
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:53:59.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 13:53:59.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f" in namespace "projected-308" to be "success or failure"
Jan 28 13:53:59.455: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.877007ms
Jan 28 13:54:01.466: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01551473s
Jan 28 13:54:03.475: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024676843s
Jan 28 13:54:05.483: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03251479s
Jan 28 13:54:07.490: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039163506s
Jan 28 13:54:09.499: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048467197s
STEP: Saw pod success
Jan 28 13:54:09.499: INFO: Pod "downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f" satisfied condition "success or failure"
Jan 28 13:54:09.505: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f container client-container: 
STEP: delete the pod
Jan 28 13:54:09.647: INFO: Waiting for pod downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f to disappear
Jan 28 13:54:09.701: INFO: Pod downwardapi-volume-81ffeee1-857a-47f3-89a7-522605462d2f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:54:09.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-308" for this suite.
Jan 28 13:54:15.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:54:15.922: INFO: namespace projected-308 deletion completed in 6.197917557s

• [SLOW TEST:16.578 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
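The "podname only" test above projects a single downward-API field into a volume. A minimal sketch (image assumed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container        # container name matches the log above
    image: busybox                # assumed
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                    # projected volume wrapping a downwardAPI source
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```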
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:54:15.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5ba1df87-55c4-4679-a0f2-47c140be7735
STEP: Creating a pod to test consume secrets
Jan 28 13:54:16.074: INFO: Waiting up to 5m0s for pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc" in namespace "secrets-4393" to be "success or failure"
Jan 28 13:54:16.087: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.862973ms
Jan 28 13:54:18.102: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026962553s
Jan 28 13:54:20.109: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034304048s
Jan 28 13:54:22.119: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044784114s
Jan 28 13:54:24.127: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052919349s
Jan 28 13:54:26.138: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063573836s
STEP: Saw pod success
Jan 28 13:54:26.138: INFO: Pod "pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc" satisfied condition "success or failure"
Jan 28 13:54:26.165: INFO: Trying to get logs from node iruya-node pod pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc container secret-volume-test: 
STEP: delete the pod
Jan 28 13:54:26.243: INFO: Waiting for pod pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc to disappear
Jan 28 13:54:26.250: INFO: Pod pod-secrets-517e4d04-346c-4fec-98c2-f5e1bd608ddc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:54:26.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4393" for this suite.
Jan 28 13:54:32.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:54:32.505: INFO: namespace secrets-4393 deletion completed in 6.242204906s

• [SLOW TEST:16.582 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
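The `defaultMode` case above verifies the file permissions applied to projected secret files. A sketch with assumed names, image, and mode:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test      # container name matches the log above
    image: busybox                # assumed
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test     # suite uses a UUID-suffixed name
      defaultMode: 0400           # owner read-only; assumed mode value
```

The test reads the resulting file modes from inside the container and asserts they match `defaultMode`, which is why it is tagged `[LinuxOnly]`.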
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:54:32.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 13:54:32.673: INFO: Number of nodes with available pods: 0
Jan 28 13:54:32.673: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:33.697: INFO: Number of nodes with available pods: 0
Jan 28 13:54:33.697: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:34.713: INFO: Number of nodes with available pods: 0
Jan 28 13:54:34.713: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:35.729: INFO: Number of nodes with available pods: 0
Jan 28 13:54:35.729: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:37.653: INFO: Number of nodes with available pods: 0
Jan 28 13:54:37.653: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:37.695: INFO: Number of nodes with available pods: 0
Jan 28 13:54:37.695: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:40.263: INFO: Number of nodes with available pods: 0
Jan 28 13:54:40.263: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:41.343: INFO: Number of nodes with available pods: 0
Jan 28 13:54:41.343: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:41.698: INFO: Number of nodes with available pods: 0
Jan 28 13:54:41.698: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:42.685: INFO: Number of nodes with available pods: 0
Jan 28 13:54:42.685: INFO: Node iruya-node is running more than one daemon pod
Jan 28 13:54:43.704: INFO: Number of nodes with available pods: 2
Jan 28 13:54:43.704: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 28 13:54:43.762: INFO: Number of nodes with available pods: 1
Jan 28 13:54:43.762: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:44.781: INFO: Number of nodes with available pods: 1
Jan 28 13:54:44.781: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:46.069: INFO: Number of nodes with available pods: 1
Jan 28 13:54:46.069: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:46.783: INFO: Number of nodes with available pods: 1
Jan 28 13:54:46.783: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:47.801: INFO: Number of nodes with available pods: 1
Jan 28 13:54:47.801: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:48.844: INFO: Number of nodes with available pods: 1
Jan 28 13:54:48.844: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:49.774: INFO: Number of nodes with available pods: 1
Jan 28 13:54:49.774: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:50.785: INFO: Number of nodes with available pods: 1
Jan 28 13:54:50.785: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:51.804: INFO: Number of nodes with available pods: 1
Jan 28 13:54:51.804: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:52.783: INFO: Number of nodes with available pods: 1
Jan 28 13:54:52.784: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:53.787: INFO: Number of nodes with available pods: 1
Jan 28 13:54:53.787: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:54.782: INFO: Number of nodes with available pods: 1
Jan 28 13:54:54.782: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:55.780: INFO: Number of nodes with available pods: 1
Jan 28 13:54:55.780: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:56.782: INFO: Number of nodes with available pods: 1
Jan 28 13:54:56.782: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:57.789: INFO: Number of nodes with available pods: 1
Jan 28 13:54:57.789: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:59.154: INFO: Number of nodes with available pods: 1
Jan 28 13:54:59.154: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:54:59.800: INFO: Number of nodes with available pods: 1
Jan 28 13:54:59.800: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:55:00.785: INFO: Number of nodes with available pods: 1
Jan 28 13:55:00.785: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:55:02.940: INFO: Number of nodes with available pods: 1
Jan 28 13:55:02.940: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:55:03.781: INFO: Number of nodes with available pods: 1
Jan 28 13:55:03.781: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:55:04.782: INFO: Number of nodes with available pods: 1
Jan 28 13:55:04.782: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 13:55:05.777: INFO: Number of nodes with available pods: 2
Jan 28 13:55:05.777: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3062, will wait for the garbage collector to delete the pods
Jan 28 13:55:05.865: INFO: Deleting DaemonSet.extensions daemon-set took: 24.207362ms
Jan 28 13:55:06.166: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.882708ms
Jan 28 13:55:17.977: INFO: Number of nodes with available pods: 0
Jan 28 13:55:17.978: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 13:55:17.984: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3062/daemonsets","resourceVersion":"22192965"},"items":null}

Jan 28 13:55:17.989: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3062/pods","resourceVersion":"22192965"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:55:18.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3062" for this suite.
Jan 28 13:55:24.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:55:24.133: INFO: namespace daemonsets-3062 deletion completed in 6.123571669s

• [SLOW TEST:51.627 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:55:24.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 28 13:55:24.328: INFO: Waiting up to 5m0s for pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108" in namespace "downward-api-2690" to be "success or failure"
Jan 28 13:55:24.396: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108": Phase="Pending", Reason="", readiness=false. Elapsed: 67.190697ms
Jan 28 13:55:26.415: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086376948s
Jan 28 13:55:28.427: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098764266s
Jan 28 13:55:30.439: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110691984s
Jan 28 13:55:32.455: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125953217s
STEP: Saw pod success
Jan 28 13:55:32.455: INFO: Pod "downward-api-61c7f307-059e-4418-a16a-a7645e873108" satisfied condition "success or failure"
Jan 28 13:55:32.461: INFO: Trying to get logs from node iruya-node pod downward-api-61c7f307-059e-4418-a16a-a7645e873108 container dapi-container: 
STEP: delete the pod
Jan 28 13:55:32.536: INFO: Waiting for pod downward-api-61c7f307-059e-4418-a16a-a7645e873108 to disappear
Jan 28 13:55:32.542: INFO: Pod downward-api-61c7f307-059e-4418-a16a-a7645e873108 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:55:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2690" for this suite.
Jan 28 13:55:38.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:55:38.710: INFO: namespace downward-api-2690 deletion completed in 6.162617525s

• [SLOW TEST:14.573 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:55:38.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 28 13:55:47.923: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:55:48.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2138" for this suite.
Jan 28 13:56:11.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:56:11.148: INFO: namespace replicaset-2138 deletion completed in 22.148283256s

• [SLOW TEST:32.438 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:56:11.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 28 13:56:19.934: INFO: Successfully updated pod "annotationupdate394dd8bd-d938-4321-8f8f-2f35f830f0c0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:56:24.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6501" for this suite.
Jan 28 13:56:46.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:56:46.186: INFO: namespace downward-api-6501 deletion completed in 22.145289186s

• [SLOW TEST:35.037 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:56:46.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 13:56:46.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8618'
Jan 28 13:56:46.580: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 13:56:46.580: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jan 28 13:56:46.659: INFO: scanned /root for discovery docs: 
Jan 28 13:56:46.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8618'
Jan 28 13:57:08.226: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 13:57:08.226: INFO: stdout: "Created e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c\nScaling up e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 28 13:57:08.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:08.423: INFO: stderr: ""
Jan 28 13:57:08.423: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:13.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:13.596: INFO: stderr: ""
Jan 28 13:57:13.596: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:18.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:18.744: INFO: stderr: ""
Jan 28 13:57:18.744: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:23.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:23.976: INFO: stderr: ""
Jan 28 13:57:23.977: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:28.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:29.141: INFO: stderr: ""
Jan 28 13:57:29.141: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:34.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:34.292: INFO: stderr: ""
Jan 28 13:57:34.292: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:39.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:39.519: INFO: stderr: ""
Jan 28 13:57:39.520: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:44.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:44.689: INFO: stderr: ""
Jan 28 13:57:44.689: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:49.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:49.885: INFO: stderr: ""
Jan 28 13:57:49.885: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:57:54.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:57:55.055: INFO: stderr: ""
Jan 28 13:57:55.055: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:58:00.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:00.216: INFO: stderr: ""
Jan 28 13:58:00.216: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:58:05.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:05.374: INFO: stderr: ""
Jan 28 13:58:05.374: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:58:10.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:10.583: INFO: stderr: ""
Jan 28 13:58:10.583: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:58:15.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:15.718: INFO: stderr: ""
Jan 28 13:58:15.719: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx e2e-test-nginx-rc-4cnm6 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 28 13:58:20.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:20.957: INFO: stderr: ""
Jan 28 13:58:20.958: INFO: stdout: "e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx "
Jan 28 13:58:20.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8618'
Jan 28 13:58:21.076: INFO: stderr: ""
Jan 28 13:58:21.076: INFO: stdout: "true"
Jan 28 13:58:21.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8618'
Jan 28 13:58:21.177: INFO: stderr: ""
Jan 28 13:58:21.177: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 28 13:58:21.177: INFO: e2e-test-nginx-rc-37299ee30f8f14742813ebe39c07509c-xrjhx is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan 28 13:58:21.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8618'
Jan 28 13:58:21.293: INFO: stderr: ""
Jan 28 13:58:21.293: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:58:21.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8618" for this suite.
Jan 28 13:58:43.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:58:43.498: INFO: namespace kubectl-8618 deletion completed in 22.198656811s

• [SLOW TEST:117.310 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:58:43.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-e5347a3b-6ea1-4814-a665-d740a1dc3fb2
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:58:43.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9495" for this suite.
Jan 28 13:58:49.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:58:49.899: INFO: namespace secrets-9495 deletion completed in 6.255639083s

• [SLOW TEST:6.401 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:58:49.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2835.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2835.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 13.136.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.136.13_udp@PTR;check="$$(dig +tcp +noall +answer +search 13.136.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.136.13_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2835.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2835.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2835.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2835.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 13.136.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.136.13_udp@PTR;check="$$(dig +tcp +noall +answer +search 13.136.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.136.13_tcp@PTR;sleep 1; done

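Editorial note (not part of the test run): the dig commands above rely on two naming conventions that are easy to miss. A pod's DNS A-record is its IP with dots replaced by dashes under `<namespace>.pod.cluster.local` (the `hostname -i | awk` pipeline), and the PTR probe reverses the service IP's octets under `in-addr.arpa.` (10.97.136.13 becomes 13.136.97.10.in-addr.arpa.). A minimal sketch of both transformations, with illustrative inputs:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Pod A-record name: dots in the IP become dashes,
    suffixed with <namespace>.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"

def ptr_name(ip: str) -> str:
    """PTR query name: IPv4 octets reversed under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

# Example pod IP is illustrative; the namespace and service IP come from the log.
print(pod_a_record("10.44.0.1", "dns-2835"))  # 10-44-0-1.dns-2835.pod.cluster.local
print(ptr_name("10.97.136.13"))               # 13.136.97.10.in-addr.arpa.
```

These are exactly the names the probe writes result files for (e.g. `wheezy_udp@PodARecord`, `10.97.136.13_udp@PTR`), so the "Unable to read ..." lines below correspond to lookups for these generated names that had not yet succeeded.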
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 13:59:02.633: INFO: Unable to read wheezy_udp@dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.642: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.648: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.653: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.658: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.662: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.666: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.671: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.676: INFO: Unable to read 10.97.136.13_udp@PTR from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.681: INFO: Unable to read 10.97.136.13_tcp@PTR from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.685: INFO: Unable to read jessie_udp@dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.689: INFO: Unable to read jessie_tcp@dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.696: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.699: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.703: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.707: INFO: Unable to read jessie_udp@PodARecord from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.711: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.714: INFO: Unable to read 10.97.136.13_udp@PTR from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.718: INFO: Unable to read 10.97.136.13_tcp@PTR from pod dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645: the server could not find the requested resource (get pods dns-test-5851f760-5a07-4a33-b095-fa65a3457645)
Jan 28 13:59:02.718: INFO: Lookups using dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645 failed for: [wheezy_udp@dns-test-service.dns-2835.svc.cluster.local wheezy_tcp@dns-test-service.dns-2835.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.97.136.13_udp@PTR 10.97.136.13_tcp@PTR jessie_udp@dns-test-service.dns-2835.svc.cluster.local jessie_tcp@dns-test-service.dns-2835.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2835.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-2835.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-2835.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.97.136.13_udp@PTR 10.97.136.13_tcp@PTR]

Jan 28 13:59:07.947: INFO: DNS probes using dns-2835/dns-test-5851f760-5a07-4a33-b095-fa65a3457645 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:59:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2835" for this suite.
Jan 28 13:59:14.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:59:14.595: INFO: namespace dns-2835 deletion completed in 6.211137377s

• [SLOW TEST:24.695 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:59:14.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:59:22.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7003" for this suite.
Jan 28 13:59:28.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:59:29.084: INFO: namespace emptydir-wrapper-7003 deletion completed in 6.196824271s

• [SLOW TEST:14.488 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:59:29.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 13:59:29.215: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 28 13:59:34.225: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 13:59:38.255: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 28 13:59:38.392: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8718,SelfLink:/apis/apps/v1/namespaces/deployment-8718/deployments/test-cleanup-deployment,UID:2990968a-bc05-4c59-847c-a1c06159acec,ResourceVersion:22193618,Generation:1,CreationTimestamp:2020-01-28 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 28 13:59:38.459: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8718,SelfLink:/apis/apps/v1/namespaces/deployment-8718/replicasets/test-cleanup-deployment-55bbcbc84c,UID:35fc9a0a-2422-482d-896a-0441c421ea96,ResourceVersion:22193620,Generation:1,CreationTimestamp:2020-01-28 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2990968a-bc05-4c59-847c-a1c06159acec 0xc002146c07 0xc002146c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 13:59:38.460: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 28 13:59:38.460: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8718,SelfLink:/apis/apps/v1/namespaces/deployment-8718/replicasets/test-cleanup-controller,UID:b2e261b0-a1ec-4216-a556-acc0632b5379,ResourceVersion:22193619,Generation:1,CreationTimestamp:2020-01-28 13:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 2990968a-bc05-4c59-847c-a1c06159acec 0xc002146b37 0xc002146b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 28 13:59:38.512: INFO: Pod "test-cleanup-controller-kn65c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-kn65c,GenerateName:test-cleanup-controller-,Namespace:deployment-8718,SelfLink:/api/v1/namespaces/deployment-8718/pods/test-cleanup-controller-kn65c,UID:c0fd642c-07f9-42a7-9aa6-36593437d065,ResourceVersion:22193612,Generation:0,CreationTimestamp:2020-01-28 13:59:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller b2e261b0-a1ec-4216-a556-acc0632b5379 0xc0021474d7 0xc0021474d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xxc9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xxc9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xxc9b true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002147550} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002147570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:59:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:59:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:59:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:59:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-28 13:59:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-28 13:59:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de4bc01f7df26fe52838d57b2340b425deb23958d78ebe0e1ddf54bd86730ec0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 28 13:59:38.513: INFO: Pod "test-cleanup-deployment-55bbcbc84c-26l9z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-26l9z,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8718,SelfLink:/api/v1/namespaces/deployment-8718/pods/test-cleanup-deployment-55bbcbc84c-26l9z,UID:3f847486-a7bd-48d6-bbef-e61b59753900,ResourceVersion:22193625,Generation:0,CreationTimestamp:2020-01-28 13:59:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 35fc9a0a-2422-482d-896a-0441c421ea96 0xc002147657 0xc002147658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xxc9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xxc9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-xxc9b true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021476d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021476f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 13:59:38 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:59:38.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8718" for this suite.
Jan 28 13:59:46.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 13:59:46.826: INFO: namespace deployment-8718 deletion completed in 8.275117797s

• [SLOW TEST:17.742 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 13:59:46.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6d0d4f1e-323d-45ed-bbc6-15356ce67b5c
STEP: Creating a pod to test consume configMaps
Jan 28 13:59:47.067: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97" in namespace "projected-3251" to be "success or failure"
Jan 28 13:59:47.083: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Pending", Reason="", readiness=false. Elapsed: 15.881287ms
Jan 28 13:59:49.093: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026579317s
Jan 28 13:59:51.103: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036077995s
Jan 28 13:59:53.113: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045858107s
Jan 28 13:59:55.137: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069693442s
Jan 28 13:59:57.150: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083580097s
STEP: Saw pod success
Jan 28 13:59:57.151: INFO: Pod "pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97" satisfied condition "success or failure"
Jan 28 13:59:57.157: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jan 28 13:59:57.217: INFO: Waiting for pod pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97 to disappear
Jan 28 13:59:57.332: INFO: Pod pod-projected-configmaps-67bb44e6-f3a5-486b-8ac9-4ce923e30e97 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 13:59:57.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3251" for this suite.
Jan 28 14:00:03.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:00:03.609: INFO: namespace projected-3251 deletion completed in 6.266224249s

• [SLOW TEST:16.783 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:00:03.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:00:03.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097" in namespace "projected-4610" to be "success or failure"
Jan 28 14:00:03.792: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097": Phase="Pending", Reason="", readiness=false. Elapsed: 95.516556ms
Jan 28 14:00:05.804: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107178756s
Jan 28 14:00:07.822: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125530352s
Jan 28 14:00:09.842: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145127167s
Jan 28 14:00:11.871: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174386341s
STEP: Saw pod success
Jan 28 14:00:11.871: INFO: Pod "downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097" satisfied condition "success or failure"
Jan 28 14:00:11.878: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097 container client-container: <nil>
STEP: delete the pod
Jan 28 14:00:12.060: INFO: Waiting for pod downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097 to disappear
Jan 28 14:00:12.142: INFO: Pod downwardapi-volume-7927ac59-3dfe-4026-ae62-f831b75db097 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:00:12.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4610" for this suite.
Jan 28 14:00:18.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:00:18.359: INFO: namespace projected-4610 deletion completed in 6.140410717s

• [SLOW TEST:14.746 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:00:18.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-d62964ca-b90e-417c-9bcd-77c60cc22b54
STEP: Creating a pod to test consume configMaps
Jan 28 14:00:18.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026" in namespace "configmap-9404" to be "success or failure"
Jan 28 14:00:18.487: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Pending", Reason="", readiness=false. Elapsed: 10.930049ms
Jan 28 14:00:20.499: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02362156s
Jan 28 14:00:22.515: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039507316s
Jan 28 14:00:24.532: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055775082s
Jan 28 14:00:26.551: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075689756s
Jan 28 14:00:28.571: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094781139s
STEP: Saw pod success
Jan 28 14:00:28.571: INFO: Pod "pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026" satisfied condition "success or failure"
Jan 28 14:00:28.580: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026 container configmap-volume-test: <nil>
STEP: delete the pod
Jan 28 14:00:28.741: INFO: Waiting for pod pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026 to disappear
Jan 28 14:00:28.760: INFO: Pod pod-configmaps-0acdfb65-bf41-4777-ac37-3dc0ef56b026 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:00:28.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9404" for this suite.
Jan 28 14:00:34.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:00:34.926: INFO: namespace configmap-9404 deletion completed in 6.153807808s

• [SLOW TEST:16.566 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
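The ConfigMap volume test above creates a pod whose container reads a ConfigMap key through a volume with a key-to-path mapping while running as a non-root user. A minimal sketch of that shape, assuming illustrative names and image (none of these identifiers come from the log):

```yaml
# Hedged sketch: ConfigMap consumed via a volume with key-to-path
# mappings, run as a non-root UID. Names and image are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map     # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                   # non-root, per the [LinuxOnly] non-root variant
  containers:
  - name: configmap-volume-test       # container name matches the log above
    image: busybox                    # illustrative image
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1          # the "mapping" under test
  restartPolicy: Never
```

The pod runs to `Succeeded` once `cat` exits 0, which is the "success or failure" condition the log polls for.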
SSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:00:34.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 28 14:00:55.144: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:55.144: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:55.242743       9 log.go:172] (0xc00073d8c0) (0xc00244d5e0) Create stream
I0128 14:00:55.242965       9 log.go:172] (0xc00073d8c0) (0xc00244d5e0) Stream added, broadcasting: 1
I0128 14:00:55.254898       9 log.go:172] (0xc00073d8c0) Reply frame received for 1
I0128 14:00:55.254967       9 log.go:172] (0xc00073d8c0) (0xc0029cc460) Create stream
I0128 14:00:55.254993       9 log.go:172] (0xc00073d8c0) (0xc0029cc460) Stream added, broadcasting: 3
I0128 14:00:55.257660       9 log.go:172] (0xc00073d8c0) Reply frame received for 3
I0128 14:00:55.257692       9 log.go:172] (0xc00073d8c0) (0xc0029cc500) Create stream
I0128 14:00:55.257704       9 log.go:172] (0xc00073d8c0) (0xc0029cc500) Stream added, broadcasting: 5
I0128 14:00:55.265140       9 log.go:172] (0xc00073d8c0) Reply frame received for 5
I0128 14:00:55.435413       9 log.go:172] (0xc00073d8c0) Data frame received for 3
I0128 14:00:55.435558       9 log.go:172] (0xc0029cc460) (3) Data frame handling
I0128 14:00:55.435601       9 log.go:172] (0xc0029cc460) (3) Data frame sent
I0128 14:00:55.621981       9 log.go:172] (0xc00073d8c0) Data frame received for 1
I0128 14:00:55.622250       9 log.go:172] (0xc00073d8c0) (0xc0029cc460) Stream removed, broadcasting: 3
I0128 14:00:55.622589       9 log.go:172] (0xc00244d5e0) (1) Data frame handling
I0128 14:00:55.622648       9 log.go:172] (0xc00244d5e0) (1) Data frame sent
I0128 14:00:55.622731       9 log.go:172] (0xc00073d8c0) (0xc00244d5e0) Stream removed, broadcasting: 1
I0128 14:00:55.624642       9 log.go:172] (0xc00073d8c0) (0xc0029cc500) Stream removed, broadcasting: 5
I0128 14:00:55.624777       9 log.go:172] (0xc00073d8c0) (0xc00244d5e0) Stream removed, broadcasting: 1
I0128 14:00:55.624796       9 log.go:172] (0xc00073d8c0) (0xc0029cc460) Stream removed, broadcasting: 3
I0128 14:00:55.624832       9 log.go:172] (0xc00073d8c0) (0xc0029cc500) Stream removed, broadcasting: 5
Jan 28 14:00:55.624: INFO: Exec stderr: ""
I0128 14:00:55.624901       9 log.go:172] (0xc00073d8c0) Go away received
Jan 28 14:00:55.624: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:55.625: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:55.695870       9 log.go:172] (0xc001349290) (0xc0029cc6e0) Create stream
I0128 14:00:55.696103       9 log.go:172] (0xc001349290) (0xc0029cc6e0) Stream added, broadcasting: 1
I0128 14:00:55.708895       9 log.go:172] (0xc001349290) Reply frame received for 1
I0128 14:00:55.709114       9 log.go:172] (0xc001349290) (0xc0021d19a0) Create stream
I0128 14:00:55.709128       9 log.go:172] (0xc001349290) (0xc0021d19a0) Stream added, broadcasting: 3
I0128 14:00:55.710628       9 log.go:172] (0xc001349290) Reply frame received for 3
I0128 14:00:55.710657       9 log.go:172] (0xc001349290) (0xc0021d1a40) Create stream
I0128 14:00:55.710669       9 log.go:172] (0xc001349290) (0xc0021d1a40) Stream added, broadcasting: 5
I0128 14:00:55.712551       9 log.go:172] (0xc001349290) Reply frame received for 5
I0128 14:00:55.871613       9 log.go:172] (0xc001349290) Data frame received for 3
I0128 14:00:55.871835       9 log.go:172] (0xc0021d19a0) (3) Data frame handling
I0128 14:00:55.871869       9 log.go:172] (0xc0021d19a0) (3) Data frame sent
I0128 14:00:56.029005       9 log.go:172] (0xc001349290) Data frame received for 1
I0128 14:00:56.029129       9 log.go:172] (0xc001349290) (0xc0021d19a0) Stream removed, broadcasting: 3
I0128 14:00:56.029219       9 log.go:172] (0xc0029cc6e0) (1) Data frame handling
I0128 14:00:56.029256       9 log.go:172] (0xc0029cc6e0) (1) Data frame sent
I0128 14:00:56.029302       9 log.go:172] (0xc001349290) (0xc0021d1a40) Stream removed, broadcasting: 5
I0128 14:00:56.029349       9 log.go:172] (0xc001349290) (0xc0029cc6e0) Stream removed, broadcasting: 1
I0128 14:00:56.029386       9 log.go:172] (0xc001349290) Go away received
I0128 14:00:56.029830       9 log.go:172] (0xc001349290) (0xc0029cc6e0) Stream removed, broadcasting: 1
I0128 14:00:56.029842       9 log.go:172] (0xc001349290) (0xc0021d19a0) Stream removed, broadcasting: 3
I0128 14:00:56.029850       9 log.go:172] (0xc001349290) (0xc0021d1a40) Stream removed, broadcasting: 5
Jan 28 14:00:56.029: INFO: Exec stderr: ""
Jan 28 14:00:56.030: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:56.030: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:56.089137       9 log.go:172] (0xc002b92420) (0xc00244d900) Create stream
I0128 14:00:56.089259       9 log.go:172] (0xc002b92420) (0xc00244d900) Stream added, broadcasting: 1
I0128 14:00:56.096479       9 log.go:172] (0xc002b92420) Reply frame received for 1
I0128 14:00:56.096611       9 log.go:172] (0xc002b92420) (0xc0021d1ae0) Create stream
I0128 14:00:56.096628       9 log.go:172] (0xc002b92420) (0xc0021d1ae0) Stream added, broadcasting: 3
I0128 14:00:56.098245       9 log.go:172] (0xc002b92420) Reply frame received for 3
I0128 14:00:56.098274       9 log.go:172] (0xc002b92420) (0xc00244d9a0) Create stream
I0128 14:00:56.098304       9 log.go:172] (0xc002b92420) (0xc00244d9a0) Stream added, broadcasting: 5
I0128 14:00:56.099953       9 log.go:172] (0xc002b92420) Reply frame received for 5
I0128 14:00:56.183599       9 log.go:172] (0xc002b92420) Data frame received for 3
I0128 14:00:56.183706       9 log.go:172] (0xc0021d1ae0) (3) Data frame handling
I0128 14:00:56.183748       9 log.go:172] (0xc0021d1ae0) (3) Data frame sent
I0128 14:00:56.302509       9 log.go:172] (0xc002b92420) (0xc00244d9a0) Stream removed, broadcasting: 5
I0128 14:00:56.302864       9 log.go:172] (0xc002b92420) Data frame received for 1
I0128 14:00:56.302903       9 log.go:172] (0xc00244d900) (1) Data frame handling
I0128 14:00:56.302954       9 log.go:172] (0xc00244d900) (1) Data frame sent
I0128 14:00:56.303039       9 log.go:172] (0xc002b92420) (0xc00244d900) Stream removed, broadcasting: 1
I0128 14:00:56.303855       9 log.go:172] (0xc002b92420) (0xc0021d1ae0) Stream removed, broadcasting: 3
I0128 14:00:56.304248       9 log.go:172] (0xc002b92420) Go away received
I0128 14:00:56.304694       9 log.go:172] (0xc002b92420) (0xc00244d900) Stream removed, broadcasting: 1
I0128 14:00:56.304747       9 log.go:172] (0xc002b92420) (0xc0021d1ae0) Stream removed, broadcasting: 3
I0128 14:00:56.304770       9 log.go:172] (0xc002b92420) (0xc00244d9a0) Stream removed, broadcasting: 5
Jan 28 14:00:56.304: INFO: Exec stderr: ""
Jan 28 14:00:56.305: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:56.305: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:56.367065       9 log.go:172] (0xc001a63ce0) (0xc0021d1cc0) Create stream
I0128 14:00:56.367219       9 log.go:172] (0xc001a63ce0) (0xc0021d1cc0) Stream added, broadcasting: 1
I0128 14:00:56.378099       9 log.go:172] (0xc001a63ce0) Reply frame received for 1
I0128 14:00:56.378288       9 log.go:172] (0xc001a63ce0) (0xc0029cc780) Create stream
I0128 14:00:56.378311       9 log.go:172] (0xc001a63ce0) (0xc0029cc780) Stream added, broadcasting: 3
I0128 14:00:56.380064       9 log.go:172] (0xc001a63ce0) Reply frame received for 3
I0128 14:00:56.380124       9 log.go:172] (0xc001a63ce0) (0xc0021d1d60) Create stream
I0128 14:00:56.380136       9 log.go:172] (0xc001a63ce0) (0xc0021d1d60) Stream added, broadcasting: 5
I0128 14:00:56.382747       9 log.go:172] (0xc001a63ce0) Reply frame received for 5
I0128 14:00:56.490776       9 log.go:172] (0xc001a63ce0) Data frame received for 3
I0128 14:00:56.491185       9 log.go:172] (0xc0029cc780) (3) Data frame handling
I0128 14:00:56.491222       9 log.go:172] (0xc0029cc780) (3) Data frame sent
I0128 14:00:56.682131       9 log.go:172] (0xc001a63ce0) Data frame received for 1
I0128 14:00:56.682248       9 log.go:172] (0xc0021d1cc0) (1) Data frame handling
I0128 14:00:56.682284       9 log.go:172] (0xc0021d1cc0) (1) Data frame sent
I0128 14:00:56.682576       9 log.go:172] (0xc001a63ce0) (0xc0021d1d60) Stream removed, broadcasting: 5
I0128 14:00:56.682695       9 log.go:172] (0xc001a63ce0) (0xc0021d1cc0) Stream removed, broadcasting: 1
I0128 14:00:56.682899       9 log.go:172] (0xc001a63ce0) (0xc0029cc780) Stream removed, broadcasting: 3
I0128 14:00:56.683199       9 log.go:172] (0xc001a63ce0) Go away received
I0128 14:00:56.683250       9 log.go:172] (0xc001a63ce0) (0xc0021d1cc0) Stream removed, broadcasting: 1
I0128 14:00:56.683269       9 log.go:172] (0xc001a63ce0) (0xc0029cc780) Stream removed, broadcasting: 3
I0128 14:00:56.683286       9 log.go:172] (0xc001a63ce0) (0xc0021d1d60) Stream removed, broadcasting: 5
Jan 28 14:00:56.683: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 28 14:00:56.683: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:56.683: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:56.767230       9 log.go:172] (0xc002b93290) (0xc00244dea0) Create stream
I0128 14:00:56.767466       9 log.go:172] (0xc002b93290) (0xc00244dea0) Stream added, broadcasting: 1
I0128 14:00:56.776299       9 log.go:172] (0xc002b93290) Reply frame received for 1
I0128 14:00:56.776485       9 log.go:172] (0xc002b93290) (0xc0029cc820) Create stream
I0128 14:00:56.776505       9 log.go:172] (0xc002b93290) (0xc0029cc820) Stream added, broadcasting: 3
I0128 14:00:56.778020       9 log.go:172] (0xc002b93290) Reply frame received for 3
I0128 14:00:56.778046       9 log.go:172] (0xc002b93290) (0xc00244df40) Create stream
I0128 14:00:56.778054       9 log.go:172] (0xc002b93290) (0xc00244df40) Stream added, broadcasting: 5
I0128 14:00:56.779418       9 log.go:172] (0xc002b93290) Reply frame received for 5
I0128 14:00:56.872819       9 log.go:172] (0xc002b93290) Data frame received for 3
I0128 14:00:56.872927       9 log.go:172] (0xc0029cc820) (3) Data frame handling
I0128 14:00:56.872960       9 log.go:172] (0xc0029cc820) (3) Data frame sent
I0128 14:00:56.999781       9 log.go:172] (0xc002b93290) Data frame received for 1
I0128 14:00:57.000000       9 log.go:172] (0xc002b93290) (0xc0029cc820) Stream removed, broadcasting: 3
I0128 14:00:57.000118       9 log.go:172] (0xc00244dea0) (1) Data frame handling
I0128 14:00:57.000155       9 log.go:172] (0xc00244dea0) (1) Data frame sent
I0128 14:00:57.000189       9 log.go:172] (0xc002b93290) (0xc00244df40) Stream removed, broadcasting: 5
I0128 14:00:57.000259       9 log.go:172] (0xc002b93290) (0xc00244dea0) Stream removed, broadcasting: 1
I0128 14:00:57.000274       9 log.go:172] (0xc002b93290) Go away received
I0128 14:00:57.000836       9 log.go:172] (0xc002b93290) (0xc00244dea0) Stream removed, broadcasting: 1
I0128 14:00:57.000859       9 log.go:172] (0xc002b93290) (0xc0029cc820) Stream removed, broadcasting: 3
I0128 14:00:57.000872       9 log.go:172] (0xc002b93290) (0xc00244df40) Stream removed, broadcasting: 5
Jan 28 14:00:57.000: INFO: Exec stderr: ""
Jan 28 14:00:57.001: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:57.001: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:57.067579       9 log.go:172] (0xc001f54b00) (0xc001af88c0) Create stream
I0128 14:00:57.067649       9 log.go:172] (0xc001f54b00) (0xc001af88c0) Stream added, broadcasting: 1
I0128 14:00:57.075606       9 log.go:172] (0xc001f54b00) Reply frame received for 1
I0128 14:00:57.075759       9 log.go:172] (0xc001f54b00) (0xc00279cd20) Create stream
I0128 14:00:57.075770       9 log.go:172] (0xc001f54b00) (0xc00279cd20) Stream added, broadcasting: 3
I0128 14:00:57.077348       9 log.go:172] (0xc001f54b00) Reply frame received for 3
I0128 14:00:57.077379       9 log.go:172] (0xc001f54b00) (0xc001af8960) Create stream
I0128 14:00:57.077390       9 log.go:172] (0xc001f54b00) (0xc001af8960) Stream added, broadcasting: 5
I0128 14:00:57.079070       9 log.go:172] (0xc001f54b00) Reply frame received for 5
I0128 14:00:57.210195       9 log.go:172] (0xc001f54b00) Data frame received for 3
I0128 14:00:57.210279       9 log.go:172] (0xc00279cd20) (3) Data frame handling
I0128 14:00:57.210306       9 log.go:172] (0xc00279cd20) (3) Data frame sent
I0128 14:00:57.330001       9 log.go:172] (0xc001f54b00) (0xc00279cd20) Stream removed, broadcasting: 3
I0128 14:00:57.330371       9 log.go:172] (0xc001f54b00) (0xc001af8960) Stream removed, broadcasting: 5
I0128 14:00:57.330493       9 log.go:172] (0xc001f54b00) Data frame received for 1
I0128 14:00:57.330513       9 log.go:172] (0xc001af88c0) (1) Data frame handling
I0128 14:00:57.330587       9 log.go:172] (0xc001af88c0) (1) Data frame sent
I0128 14:00:57.330601       9 log.go:172] (0xc001f54b00) (0xc001af88c0) Stream removed, broadcasting: 1
I0128 14:00:57.331183       9 log.go:172] (0xc001f54b00) (0xc001af88c0) Stream removed, broadcasting: 1
I0128 14:00:57.331223       9 log.go:172] (0xc001f54b00) (0xc00279cd20) Stream removed, broadcasting: 3
I0128 14:00:57.331259       9 log.go:172] (0xc001f54b00) (0xc001af8960) Stream removed, broadcasting: 5
I0128 14:00:57.331740       9 log.go:172] (0xc001f54b00) Go away received
Jan 28 14:00:57.332: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 28 14:00:57.332: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:57.332: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:57.414639       9 log.go:172] (0xc001f55b80) (0xc001af8d20) Create stream
I0128 14:00:57.414764       9 log.go:172] (0xc001f55b80) (0xc001af8d20) Stream added, broadcasting: 1
I0128 14:00:57.429252       9 log.go:172] (0xc001f55b80) Reply frame received for 1
I0128 14:00:57.429412       9 log.go:172] (0xc001f55b80) (0xc002d82000) Create stream
I0128 14:00:57.429433       9 log.go:172] (0xc001f55b80) (0xc002d82000) Stream added, broadcasting: 3
I0128 14:00:57.432982       9 log.go:172] (0xc001f55b80) Reply frame received for 3
I0128 14:00:57.433031       9 log.go:172] (0xc001f55b80) (0xc0029cc8c0) Create stream
I0128 14:00:57.433066       9 log.go:172] (0xc001f55b80) (0xc0029cc8c0) Stream added, broadcasting: 5
I0128 14:00:57.434800       9 log.go:172] (0xc001f55b80) Reply frame received for 5
I0128 14:00:57.533904       9 log.go:172] (0xc001f55b80) Data frame received for 3
I0128 14:00:57.533971       9 log.go:172] (0xc002d82000) (3) Data frame handling
I0128 14:00:57.534105       9 log.go:172] (0xc002d82000) (3) Data frame sent
I0128 14:00:57.651635       9 log.go:172] (0xc001f55b80) Data frame received for 1
I0128 14:00:57.651908       9 log.go:172] (0xc001f55b80) (0xc002d82000) Stream removed, broadcasting: 3
I0128 14:00:57.652057       9 log.go:172] (0xc001af8d20) (1) Data frame handling
I0128 14:00:57.652138       9 log.go:172] (0xc001af8d20) (1) Data frame sent
I0128 14:00:57.652178       9 log.go:172] (0xc001f55b80) (0xc0029cc8c0) Stream removed, broadcasting: 5
I0128 14:00:57.652242       9 log.go:172] (0xc001f55b80) (0xc001af8d20) Stream removed, broadcasting: 1
I0128 14:00:57.652290       9 log.go:172] (0xc001f55b80) Go away received
I0128 14:00:57.652754       9 log.go:172] (0xc001f55b80) (0xc001af8d20) Stream removed, broadcasting: 1
I0128 14:00:57.652816       9 log.go:172] (0xc001f55b80) (0xc002d82000) Stream removed, broadcasting: 3
I0128 14:00:57.652899       9 log.go:172] (0xc001f55b80) (0xc0029cc8c0) Stream removed, broadcasting: 5
Jan 28 14:00:57.653: INFO: Exec stderr: ""
Jan 28 14:00:57.653: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:57.653: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:57.731700       9 log.go:172] (0xc00205b760) (0xc00279d0e0) Create stream
I0128 14:00:57.731768       9 log.go:172] (0xc00205b760) (0xc00279d0e0) Stream added, broadcasting: 1
I0128 14:00:57.741102       9 log.go:172] (0xc00205b760) Reply frame received for 1
I0128 14:00:57.741339       9 log.go:172] (0xc00205b760) (0xc001af8dc0) Create stream
I0128 14:00:57.741380       9 log.go:172] (0xc00205b760) (0xc001af8dc0) Stream added, broadcasting: 3
I0128 14:00:57.743271       9 log.go:172] (0xc00205b760) Reply frame received for 3
I0128 14:00:57.743322       9 log.go:172] (0xc00205b760) (0xc00279d180) Create stream
I0128 14:00:57.743341       9 log.go:172] (0xc00205b760) (0xc00279d180) Stream added, broadcasting: 5
I0128 14:00:57.745732       9 log.go:172] (0xc00205b760) Reply frame received for 5
I0128 14:00:57.872089       9 log.go:172] (0xc00205b760) Data frame received for 3
I0128 14:00:57.872266       9 log.go:172] (0xc001af8dc0) (3) Data frame handling
I0128 14:00:57.872521       9 log.go:172] (0xc001af8dc0) (3) Data frame sent
I0128 14:00:58.005826       9 log.go:172] (0xc00205b760) (0xc001af8dc0) Stream removed, broadcasting: 3
I0128 14:00:58.005980       9 log.go:172] (0xc00205b760) Data frame received for 1
I0128 14:00:58.006025       9 log.go:172] (0xc00279d0e0) (1) Data frame handling
I0128 14:00:58.006056       9 log.go:172] (0xc00279d0e0) (1) Data frame sent
I0128 14:00:58.006075       9 log.go:172] (0xc00205b760) (0xc00279d0e0) Stream removed, broadcasting: 1
I0128 14:00:58.006300       9 log.go:172] (0xc00205b760) (0xc00279d180) Stream removed, broadcasting: 5
I0128 14:00:58.006355       9 log.go:172] (0xc00205b760) Go away received
I0128 14:00:58.006718       9 log.go:172] (0xc00205b760) (0xc00279d0e0) Stream removed, broadcasting: 1
I0128 14:00:58.006757       9 log.go:172] (0xc00205b760) (0xc001af8dc0) Stream removed, broadcasting: 3
I0128 14:00:58.006801       9 log.go:172] (0xc00205b760) (0xc00279d180) Stream removed, broadcasting: 5
Jan 28 14:00:58.006: INFO: Exec stderr: ""
Jan 28 14:00:58.007: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:58.007: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:58.073160       9 log.go:172] (0xc002d86b00) (0xc001af9180) Create stream
I0128 14:00:58.073305       9 log.go:172] (0xc002d86b00) (0xc001af9180) Stream added, broadcasting: 1
I0128 14:00:58.083039       9 log.go:172] (0xc002d86b00) Reply frame received for 1
I0128 14:00:58.083116       9 log.go:172] (0xc002d86b00) (0xc00279d220) Create stream
I0128 14:00:58.083126       9 log.go:172] (0xc002d86b00) (0xc00279d220) Stream added, broadcasting: 3
I0128 14:00:58.084962       9 log.go:172] (0xc002d86b00) Reply frame received for 3
I0128 14:00:58.085043       9 log.go:172] (0xc002d86b00) (0xc002d820a0) Create stream
I0128 14:00:58.085050       9 log.go:172] (0xc002d86b00) (0xc002d820a0) Stream added, broadcasting: 5
I0128 14:00:58.086918       9 log.go:172] (0xc002d86b00) Reply frame received for 5
I0128 14:00:58.211187       9 log.go:172] (0xc002d86b00) Data frame received for 3
I0128 14:00:58.211520       9 log.go:172] (0xc00279d220) (3) Data frame handling
I0128 14:00:58.211581       9 log.go:172] (0xc00279d220) (3) Data frame sent
I0128 14:00:58.339364       9 log.go:172] (0xc002d86b00) (0xc00279d220) Stream removed, broadcasting: 3
I0128 14:00:58.339680       9 log.go:172] (0xc002d86b00) Data frame received for 1
I0128 14:00:58.339754       9 log.go:172] (0xc001af9180) (1) Data frame handling
I0128 14:00:58.339825       9 log.go:172] (0xc001af9180) (1) Data frame sent
I0128 14:00:58.339875       9 log.go:172] (0xc002d86b00) (0xc001af9180) Stream removed, broadcasting: 1
I0128 14:00:58.340133       9 log.go:172] (0xc002d86b00) (0xc002d820a0) Stream removed, broadcasting: 5
I0128 14:00:58.340425       9 log.go:172] (0xc002d86b00) Go away received
I0128 14:00:58.340584       9 log.go:172] (0xc002d86b00) (0xc001af9180) Stream removed, broadcasting: 1
I0128 14:00:58.340654       9 log.go:172] (0xc002d86b00) (0xc00279d220) Stream removed, broadcasting: 3
I0128 14:00:58.340700       9 log.go:172] (0xc002d86b00) (0xc002d820a0) Stream removed, broadcasting: 5
Jan 28 14:00:58.340: INFO: Exec stderr: ""
Jan 28 14:00:58.340: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8846 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:00:58.342: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:00:58.416258       9 log.go:172] (0xc002d8aa50) (0xc002d82280) Create stream
I0128 14:00:58.416370       9 log.go:172] (0xc002d8aa50) (0xc002d82280) Stream added, broadcasting: 1
I0128 14:00:58.474973       9 log.go:172] (0xc002d8aa50) Reply frame received for 1
I0128 14:00:58.475422       9 log.go:172] (0xc002d8aa50) (0xc001af9220) Create stream
I0128 14:00:58.475509       9 log.go:172] (0xc002d8aa50) (0xc001af9220) Stream added, broadcasting: 3
I0128 14:00:58.492253       9 log.go:172] (0xc002d8aa50) Reply frame received for 3
I0128 14:00:58.492459       9 log.go:172] (0xc002d8aa50) (0xc0009fa0a0) Create stream
I0128 14:00:58.492487       9 log.go:172] (0xc002d8aa50) (0xc0009fa0a0) Stream added, broadcasting: 5
I0128 14:00:58.496030       9 log.go:172] (0xc002d8aa50) Reply frame received for 5
I0128 14:00:58.925424       9 log.go:172] (0xc002d8aa50) Data frame received for 3
I0128 14:00:58.925827       9 log.go:172] (0xc001af9220) (3) Data frame handling
I0128 14:00:58.925906       9 log.go:172] (0xc001af9220) (3) Data frame sent
I0128 14:00:59.093986       9 log.go:172] (0xc002d8aa50) Data frame received for 1
I0128 14:00:59.094257       9 log.go:172] (0xc002d8aa50) (0xc001af9220) Stream removed, broadcasting: 3
I0128 14:00:59.094647       9 log.go:172] (0xc002d82280) (1) Data frame handling
I0128 14:00:59.095054       9 log.go:172] (0xc002d82280) (1) Data frame sent
I0128 14:00:59.095119       9 log.go:172] (0xc002d8aa50) (0xc0009fa0a0) Stream removed, broadcasting: 5
I0128 14:00:59.095372       9 log.go:172] (0xc002d8aa50) (0xc002d82280) Stream removed, broadcasting: 1
I0128 14:00:59.095484       9 log.go:172] (0xc002d8aa50) Go away received
I0128 14:00:59.096191       9 log.go:172] (0xc002d8aa50) (0xc002d82280) Stream removed, broadcasting: 1
I0128 14:00:59.096247       9 log.go:172] (0xc002d8aa50) (0xc001af9220) Stream removed, broadcasting: 3
I0128 14:00:59.096263       9 log.go:172] (0xc002d8aa50) (0xc0009fa0a0) Stream removed, broadcasting: 5
Jan 28 14:00:59.096: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:00:59.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8846" for this suite.
Jan 28 14:01:43.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:01:43.348: INFO: namespace e2e-kubelet-etc-hosts-8846 deletion completed in 44.241591026s

• [SLOW TEST:68.421 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
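The KubeletManagedEtcHosts test above exercises three cases: for a `hostNetwork: false` pod the kubelet injects a managed `/etc/hosts`; a container that mounts `/etc/hosts` itself is left untouched; and a `hostNetwork: true` pod sees the node's file. A hedged sketch of the two pod shapes, using the pod names from the exec logs but illustrative images and commands:

```yaml
# Hedged sketch of the two pods the test creates. Container layout is
# an assumption; only the pod names appear in the log above.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  hostNetwork: false                  # kubelet injects a managed /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: hosts-file
      mountPath: /etc/hosts           # explicit mount => kubelet leaves it alone
  volumes:
  - name: hosts-file
    hostPath:
      path: /etc/hosts
---
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true                   # containers see the node's /etc/hosts
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
```

The `cat /etc/hosts` / `cat /etc/hosts-original` execs in the log compare the file each container actually sees against the expected managed or unmanaged content.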
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:01:43.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan 28 14:01:51.517: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 28 14:02:06.698: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:02:06.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9462" for this suite.
Jan 28 14:02:12.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:02:12.967: INFO: namespace pods-9462 deletion completed in 6.258696488s

• [SLOW TEST:29.619 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
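The Delete Grace Period test above submits a pod, deletes it gracefully, and waits for the kubelet to observe the termination notice. A minimal sketch of the mechanism, with a hypothetical pod name and illustrative image:

```yaml
# Hedged sketch: explicit termination grace period on a pod.
# Name and image are illustrative, not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove             # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # window the kubelet gives containers to exit
  containers:
  - name: nginx
    image: nginx
# Graceful deletion (grace period can also be overridden at delete time):
#   kubectl delete pod pod-submit-remove --grace-period=30
```

Once the pod object disappears from the API server, the test concludes the termination request was observed and completed, as the log line at 14:02:06 states.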
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:02:12.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7e7ebba8-8898-4d4a-b7bf-8f394b0091c3
STEP: Creating a pod to test consume configMaps
Jan 28 14:02:13.096: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318" in namespace "projected-8607" to be "success or failure"
Jan 28 14:02:13.119: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Pending", Reason="", readiness=false. Elapsed: 22.849807ms
Jan 28 14:02:15.128: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031625127s
Jan 28 14:02:17.140: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043806285s
Jan 28 14:02:19.150: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053413537s
Jan 28 14:02:21.160: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Running", Reason="", readiness=true. Elapsed: 8.063333209s
Jan 28 14:02:23.170: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07383842s
STEP: Saw pod success
Jan 28 14:02:23.170: INFO: Pod "pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318" satisfied condition "success or failure"
Jan 28 14:02:23.175: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 14:02:23.243: INFO: Waiting for pod pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318 to disappear
Jan 28 14:02:23.351: INFO: Pod pod-projected-configmaps-1fa59079-708c-4cd2-b4d7-13cc10739318 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:02:23.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8607" for this suite.
Jan 28 14:02:29.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:02:29.544: INFO: namespace projected-8607 deletion completed in 6.180725341s

• [SLOW TEST:16.575 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
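The Projected configMap test above is the projected-volume variant of the earlier ConfigMap test: the same key-to-path mapping, but delivered through a `projected` volume source with an explicit per-item file mode. A hedged sketch, assuming illustrative names, key, and image:

```yaml
# Hedged sketch: projected ConfigMap volume with an item mapping and
# an explicit item mode. Names, key, and image are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test  # container name matches the log above
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map  # hypothetical name
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400                     # the "Item mode" under test
  restartPolicy: Never
```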
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:02:29.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:02:29.650: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 28 14:02:34.661: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 14:02:36.673: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 28 14:02:38.681: INFO: Creating deployment "test-rollover-deployment"
Jan 28 14:02:38.709: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 28 14:02:40.751: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 28 14:02:40.776: INFO: Ensure that both replica sets have 1 created replica
Jan 28 14:02:40.786: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 28 14:02:40.801: INFO: Updating deployment test-rollover-deployment
Jan 28 14:02:40.801: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 28 14:02:42.944: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 28 14:02:42.955: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 28 14:02:42.966: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:42.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:44.991: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:44.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:46.985: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:46.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:48.991: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:48.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816961, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:50.984: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:50.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816969, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:52.991: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:52.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816969, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:54.980: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:54.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816969, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:56.986: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:56.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816969, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:02:59.021: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 14:02:59.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816969, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715816958, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:03:00.989: INFO: 
Jan 28 14:03:00.989: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 28 14:03:01.002: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4987,SelfLink:/apis/apps/v1/namespaces/deployment-4987/deployments/test-rollover-deployment,UID:8781d2ee-ac56-47be-83a1-c79834d26eee,ResourceVersion:22194175,Generation:2,CreationTimestamp:2020-01-28 14:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-28 14:02:38 +0000 UTC 2020-01-28 14:02:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-28 14:02:59 +0000 UTC 2020-01-28 14:02:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 28 14:03:01.007: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4987,SelfLink:/apis/apps/v1/namespaces/deployment-4987/replicasets/test-rollover-deployment-854595fc44,UID:c751c903-ee90-4b3d-9bc6-3b8f068a5fb5,ResourceVersion:22194165,Generation:2,CreationTimestamp:2020-01-28 14:02:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8781d2ee-ac56-47be-83a1-c79834d26eee 0xc002183387 0xc002183388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 28 14:03:01.007: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 28 14:03:01.007: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4987,SelfLink:/apis/apps/v1/namespaces/deployment-4987/replicasets/test-rollover-controller,UID:dd7dff1c-f31b-41f1-8688-dbb1d321a2a4,ResourceVersion:22194173,Generation:2,CreationTimestamp:2020-01-28 14:02:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8781d2ee-ac56-47be-83a1-c79834d26eee 0xc0021831d7 0xc0021831d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 14:03:01.008: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4987,SelfLink:/apis/apps/v1/namespaces/deployment-4987/replicasets/test-rollover-deployment-9b8b997cf,UID:ef655539-c33c-4391-b2d5-c3286617c5a5,ResourceVersion:22194132,Generation:2,CreationTimestamp:2020-01-28 14:02:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8781d2ee-ac56-47be-83a1-c79834d26eee 0xc002183450 0xc002183451}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 14:03:01.014: INFO: Pod "test-rollover-deployment-854595fc44-sxd62" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-sxd62,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4987,SelfLink:/api/v1/namespaces/deployment-4987/pods/test-rollover-deployment-854595fc44-sxd62,UID:e29b2d0a-f9d7-44b7-a1bc-75ede989983d,ResourceVersion:22194149,Generation:0,CreationTimestamp:2020-01-28 14:02:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 c751c903-ee90-4b3d-9bc6-3b8f068a5fb5 0xc001f752d7 0xc001f752d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rkv4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rkv4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4rkv4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f754a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f75550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:02:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:02:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:02:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:02:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-28 14:02:41 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-28 14:02:48 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ef11f316b13f8cfb9d878dc2fe7275bafbe0170ca572e98fbe8e8a8bf05ea1a2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:03:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4987" for this suite.
Jan 28 14:03:07.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:03:07.210: INFO: namespace deployment-4987 deletion completed in 6.188776913s

• [SLOW TEST:37.663 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
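The `&Deployment{…}` struct dump in this spec pins down the object under test. As a rough hand reconstruction into apps/v1 YAML for readability (only fields visible in the dump are included; this is an approximation, not the actual e2e fixture):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # new pods must stay ready 10s before counting as available
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count mid-rollover
      maxSurge: 1              # allow one extra pod while the new ReplicaSet comes up
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

The `maxUnavailable: 0` / `maxSurge: 1` pair plus `minReadySeconds: 10` is what makes the repeated "all replica sets need to contain the pod-template-hash label" polling above take ~20s: the new pod must be ready for 10s before the old ReplicaSet is scaled to zero.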
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:03:07.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 28 14:03:07.534: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1596,SelfLink:/api/v1/namespaces/watch-1596/configmaps/e2e-watch-test-resource-version,UID:5cfd334b-2602-446e-96a6-64be91947685,ResourceVersion:22194225,Generation:0,CreationTimestamp:2020-01-28 14:03:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 14:03:07.535: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1596,SelfLink:/api/v1/namespaces/watch-1596/configmaps/e2e-watch-test-resource-version,UID:5cfd334b-2602-446e-96a6-64be91947685,ResourceVersion:22194226,Generation:0,CreationTimestamp:2020-01-28 14:03:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:03:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1596" for this suite.
Jan 28 14:03:13.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:03:13.877: INFO: namespace watch-1596 deletion completed in 6.332135945s

• [SLOW TEST:6.666 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
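What the watch spec above verifies is the resourceVersion contract: a watch started from the version returned by the first update delivers only events strictly newer than that version, hence the log shows exactly one MODIFIED (the second update) and the DELETED. A toy model of that filtering in plain Python (the names `replay_from` and the event dicts are illustrative, not the client-go or kubernetes-client API):

```python
def replay_from(events, resource_version):
    """Deliver only events whose resourceVersion is strictly newer than the
    version the watch was started from (Kubernetes watch semantics)."""
    return [e for e in events if e["resourceVersion"] > resource_version]
```

Mirroring the spec: create, modify, modify, delete; watching from the first update's version yields the second MODIFIED and the DELETED, and nothing earlier.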
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:03:13.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8111
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 14:03:13.976: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 14:03:48.260: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-8111 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:03:48.261: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:03:48.340709       9 log.go:172] (0xc001a8c2c0) (0xc00233a8c0) Create stream
I0128 14:03:48.340857       9 log.go:172] (0xc001a8c2c0) (0xc00233a8c0) Stream added, broadcasting: 1
I0128 14:03:48.352088       9 log.go:172] (0xc001a8c2c0) Reply frame received for 1
I0128 14:03:48.352178       9 log.go:172] (0xc001a8c2c0) (0xc000a185a0) Create stream
I0128 14:03:48.352194       9 log.go:172] (0xc001a8c2c0) (0xc000a185a0) Stream added, broadcasting: 3
I0128 14:03:48.354232       9 log.go:172] (0xc001a8c2c0) Reply frame received for 3
I0128 14:03:48.354286       9 log.go:172] (0xc001a8c2c0) (0xc001d1a280) Create stream
I0128 14:03:48.354302       9 log.go:172] (0xc001a8c2c0) (0xc001d1a280) Stream added, broadcasting: 5
I0128 14:03:48.356375       9 log.go:172] (0xc001a8c2c0) Reply frame received for 5
I0128 14:03:48.606377       9 log.go:172] (0xc001a8c2c0) Data frame received for 3
I0128 14:03:48.606697       9 log.go:172] (0xc000a185a0) (3) Data frame handling
I0128 14:03:48.606768       9 log.go:172] (0xc000a185a0) (3) Data frame sent
I0128 14:03:48.900540       9 log.go:172] (0xc001a8c2c0) (0xc000a185a0) Stream removed, broadcasting: 3
I0128 14:03:48.900833       9 log.go:172] (0xc001a8c2c0) Data frame received for 1
I0128 14:03:48.900902       9 log.go:172] (0xc001a8c2c0) (0xc001d1a280) Stream removed, broadcasting: 5
I0128 14:03:48.900988       9 log.go:172] (0xc00233a8c0) (1) Data frame handling
I0128 14:03:48.901018       9 log.go:172] (0xc00233a8c0) (1) Data frame sent
I0128 14:03:48.901035       9 log.go:172] (0xc001a8c2c0) (0xc00233a8c0) Stream removed, broadcasting: 1
I0128 14:03:48.901100       9 log.go:172] (0xc001a8c2c0) Go away received
I0128 14:03:48.902338       9 log.go:172] (0xc001a8c2c0) (0xc00233a8c0) Stream removed, broadcasting: 1
I0128 14:03:48.902392       9 log.go:172] (0xc001a8c2c0) (0xc000a185a0) Stream removed, broadcasting: 3
I0128 14:03:48.902410       9 log.go:172] (0xc001a8c2c0) (0xc001d1a280) Stream removed, broadcasting: 5
Jan 28 14:03:48.902: INFO: Waiting for endpoints: map[]
Jan 28 14:03:48.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-8111 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:03:48.914: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:03:48.978640       9 log.go:172] (0xc001aed290) (0xc001d1a460) Create stream
I0128 14:03:48.978826       9 log.go:172] (0xc001aed290) (0xc001d1a460) Stream added, broadcasting: 1
I0128 14:03:48.992080       9 log.go:172] (0xc001aed290) Reply frame received for 1
I0128 14:03:48.992193       9 log.go:172] (0xc001aed290) (0xc001d1a500) Create stream
I0128 14:03:48.992198       9 log.go:172] (0xc001aed290) (0xc001d1a500) Stream added, broadcasting: 3
I0128 14:03:48.993822       9 log.go:172] (0xc001aed290) Reply frame received for 3
I0128 14:03:48.993928       9 log.go:172] (0xc001aed290) (0xc00233a960) Create stream
I0128 14:03:48.993945       9 log.go:172] (0xc001aed290) (0xc00233a960) Stream added, broadcasting: 5
I0128 14:03:48.995772       9 log.go:172] (0xc001aed290) Reply frame received for 5
I0128 14:03:49.118773       9 log.go:172] (0xc001aed290) Data frame received for 3
I0128 14:03:49.118854       9 log.go:172] (0xc001d1a500) (3) Data frame handling
I0128 14:03:49.118889       9 log.go:172] (0xc001d1a500) (3) Data frame sent
I0128 14:03:49.238592       9 log.go:172] (0xc001aed290) (0xc001d1a500) Stream removed, broadcasting: 3
I0128 14:03:49.238844       9 log.go:172] (0xc001aed290) Data frame received for 1
I0128 14:03:49.238894       9 log.go:172] (0xc001d1a460) (1) Data frame handling
I0128 14:03:49.238938       9 log.go:172] (0xc001d1a460) (1) Data frame sent
I0128 14:03:49.238961       9 log.go:172] (0xc001aed290) (0xc001d1a460) Stream removed, broadcasting: 1
I0128 14:03:49.239185       9 log.go:172] (0xc001aed290) (0xc00233a960) Stream removed, broadcasting: 5
I0128 14:03:49.239412       9 log.go:172] (0xc001aed290) Go away received
I0128 14:03:49.239934       9 log.go:172] (0xc001aed290) (0xc001d1a460) Stream removed, broadcasting: 1
I0128 14:03:49.240152       9 log.go:172] (0xc001aed290) (0xc001d1a500) Stream removed, broadcasting: 3
I0128 14:03:49.240232       9 log.go:172] (0xc001aed290) (0xc00233a960) Stream removed, broadcasting: 5
Jan 28 14:03:49.240: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:03:49.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8111" for this suite.
Jan 28 14:04:15.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:04:15.433: INFO: namespace pod-network-test-8111 deletion completed in 26.178584851s

• [SLOW TEST:61.555 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
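The intra-pod UDP test above probes connectivity by curling the netexec `/dial` endpoint on a test pod, which in turn dials the target pod's UDP port. A minimal sketch of how the probe URL seen in the `ExecWithOptions` lines is assembled (the helper name is an assumption; IPs and ports are the ones that appear in the log):

```python
# Sketch: build the connectivity-probe URL the e2e framework curls from the
# host-test container (see the ExecWithOptions lines above). The helper name
# is ours; the IPs and ports are the ones recorded in this log.
def dial_url(test_pod_ip, target_ip, protocol="udp",
             container_http_port=8080, endpoint_port=8081, tries=1):
    return (f"http://{test_pod_ip}:{container_http_port}/dial"
            f"?request=hostName&protocol={protocol}"
            f"&host={target_ip}&port={endpoint_port}&tries={tries}")

print(dial_url("10.44.0.2", "10.44.0.1"))
# -> http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1
```

The test passes when the dialed pod reports its hostname back, confirming pod-to-pod UDP reachability.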
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:04:15.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-dr84w in namespace proxy-4510
I0128 14:04:15.618893       9 runners.go:180] Created replication controller with name: proxy-service-dr84w, namespace: proxy-4510, replica count: 1
I0128 14:04:16.669857       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:17.670352       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:18.670903       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:19.671482       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:20.672165       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:21.673134       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:22.673785       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:23.674616       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:24.675152       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:04:25.675727       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:26.676456       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:27.676957       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:28.677380       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:29.677850       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:30.678438       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:31.679216       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:32.679659       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0128 14:04:33.680316       9 runners.go:180] proxy-service-dr84w Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 14:04:33.691: INFO: setup took 18.185887305s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
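The 16 cases exercised below are combinations of the pod and service `proxy` subresource paths, with optional `http:`/`https:` scheme prefixes and numbered or named ports. A sketch of the two URL shapes (the grouping and helper names are ours; the path layout matches the apiserver proxy subresource seen in the log lines that follow):

```python
# Sketch: the two proxy-subresource path shapes the test hits 320 times.
# Helper names are assumptions; the path layout follows the Kubernetes
# apiserver proxy subresource as recorded in this log.
def pod_proxy_path(ns, pod, port=None, scheme=""):
    target = f"{scheme}:{pod}" if scheme else pod
    if port is not None:
        target += f":{port}"
    return f"/api/v1/namespaces/{ns}/pods/{target}/proxy/"

def service_proxy_path(ns, svc, portname=None, scheme=""):
    target = f"{scheme}:{svc}" if scheme else svc
    if portname:
        target += f":{portname}"
    return f"/api/v1/namespaces/{ns}/services/{target}/proxy/"

print(pod_proxy_path("proxy-4510", "proxy-service-dr84w-6k2cw", 160))
# -> /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/
print(service_proxy_path("proxy-4510", "proxy-service-dr84w",
                         "tlsportname1", scheme="https"))
# -> /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/
```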
Jan 28 14:04:33.735: INFO: (0) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 43.948837ms)
Jan 28 14:04:33.735: INFO: (0) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 43.625149ms)
Jan 28 14:04:33.741: INFO: (0) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 49.408083ms)
Jan 28 14:04:33.741: INFO: (0) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 49.415883ms)
Jan 28 14:04:33.744: INFO: (0) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 52.228485ms)
Jan 28 14:04:33.745: INFO: (0) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 53.296308ms)
Jan 28 14:04:33.745: INFO: (0) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 53.340949ms)
Jan 28 14:04:33.745: INFO: (0) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 53.508594ms)
Jan 28 14:04:33.746: INFO: (0) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 55.180761ms)
Jan 28 14:04:33.747: INFO: (0) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 55.579876ms)
Jan 28 14:04:33.755: INFO: (0) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 63.250244ms)
Jan 28 14:04:33.761: INFO: (0) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 68.599595ms)
Jan 28 14:04:33.761: INFO: (0) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 69.478381ms)
Jan 28 14:04:33.761: INFO: (0) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 68.933618ms)
Jan 28 14:04:33.761: INFO: (0) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 69.362278ms)
Jan 28 14:04:33.761: INFO: (0) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test<... (200; 26.796426ms)
Jan 28 14:04:33.788: INFO: (1) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 27.338974ms)
Jan 28 14:04:33.789: INFO: (1) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 27.319872ms)
Jan 28 14:04:33.789: INFO: (1) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 27.872965ms)
Jan 28 14:04:33.789: INFO: (1) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 27.664244ms)
Jan 28 14:04:33.790: INFO: (1) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 28.136657ms)
Jan 28 14:04:33.790: INFO: (1) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 28.042438ms)
Jan 28 14:04:33.790: INFO: (1) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 28.680051ms)
Jan 28 14:04:33.790: INFO: (1) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 28.429902ms)
Jan 28 14:04:33.790: INFO: (1) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 28.88109ms)
Jan 28 14:04:33.796: INFO: (2) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 5.859718ms)
Jan 28 14:04:33.799: INFO: (2) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 8.707113ms)
Jan 28 14:04:33.804: INFO: (2) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 13.269471ms)
Jan 28 14:04:33.806: INFO: (2) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 14.811141ms)
Jan 28 14:04:33.806: INFO: (2) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 14.912341ms)
Jan 28 14:04:33.806: INFO: (2) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 15.597111ms)
Jan 28 14:04:33.806: INFO: (2) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 15.229622ms)
Jan 28 14:04:33.806: INFO: (2) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 15.230843ms)
Jan 28 14:04:33.808: INFO: (2) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 17.26605ms)
Jan 28 14:04:33.808: INFO: (2) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 17.567571ms)
Jan 28 14:04:33.809: INFO: (2) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 17.9795ms)
Jan 28 14:04:33.809: INFO: (2) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 23.08912ms)
Jan 28 14:04:33.841: INFO: (3) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 22.979286ms)
Jan 28 14:04:33.841: INFO: (3) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 23.021436ms)
Jan 28 14:04:33.842: INFO: (3) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 23.367575ms)
Jan 28 14:04:33.842: INFO: (3) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 24.528686ms)
Jan 28 14:04:33.843: INFO: (3) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: ... (200; 24.612341ms)
Jan 28 14:04:33.843: INFO: (3) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 24.472807ms)
Jan 28 14:04:33.843: INFO: (3) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 24.765939ms)
Jan 28 14:04:33.851: INFO: (3) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 32.438987ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 20.444138ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test<... (200; 20.203957ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 17.581702ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 19.900368ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 20.003241ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 18.07425ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 17.814523ms)
Jan 28 14:04:33.872: INFO: (4) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 19.462387ms)
Jan 28 14:04:33.874: INFO: (4) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 20.267152ms)
Jan 28 14:04:33.879: INFO: (4) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 27.703255ms)
Jan 28 14:04:33.879: INFO: (4) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 26.727294ms)
Jan 28 14:04:33.881: INFO: (4) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 27.376374ms)
Jan 28 14:04:33.881: INFO: (4) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 27.150316ms)
Jan 28 14:04:33.881: INFO: (4) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 28.908341ms)
Jan 28 14:04:33.882: INFO: (4) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 27.774349ms)
Jan 28 14:04:33.893: INFO: (5) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 11.208401ms)
Jan 28 14:04:33.893: INFO: (5) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 11.254926ms)
Jan 28 14:04:33.897: INFO: (5) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 14.239971ms)
Jan 28 14:04:33.897: INFO: (5) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 14.154155ms)
Jan 28 14:04:33.897: INFO: (5) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 14.780636ms)
Jan 28 14:04:33.898: INFO: (5) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 15.819092ms)
Jan 28 14:04:33.901: INFO: (5) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 18.693056ms)
Jan 28 14:04:33.905: INFO: (5) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 22.05154ms)
Jan 28 14:04:33.907: INFO: (5) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 25.114533ms)
Jan 28 14:04:33.908: INFO: (5) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 25.072181ms)
Jan 28 14:04:33.908: INFO: (5) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 25.877906ms)
Jan 28 14:04:33.909: INFO: (5) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 26.637396ms)
Jan 28 14:04:33.909: INFO: (5) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 26.453195ms)
Jan 28 14:04:33.909: INFO: (5) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 26.681153ms)
Jan 28 14:04:33.909: INFO: (5) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 16.001226ms)
Jan 28 14:04:33.929: INFO: (6) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 18.398394ms)
Jan 28 14:04:33.929: INFO: (6) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 18.022997ms)
Jan 28 14:04:33.929: INFO: (6) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 19.119687ms)
Jan 28 14:04:33.931: INFO: (6) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 20.616638ms)
Jan 28 14:04:33.932: INFO: (6) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 21.047266ms)
Jan 28 14:04:33.932: INFO: (6) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 21.359294ms)
Jan 28 14:04:33.934: INFO: (6) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 22.967631ms)
Jan 28 14:04:33.934: INFO: (6) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 14.80169ms)
Jan 28 14:04:33.954: INFO: (7) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 14.998441ms)
Jan 28 14:04:33.954: INFO: (7) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 15.120276ms)
Jan 28 14:04:33.954: INFO: (7) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 14.755385ms)
Jan 28 14:04:33.954: INFO: (7) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 15.202117ms)
Jan 28 14:04:33.955: INFO: (7) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 16.551708ms)
Jan 28 14:04:33.955: INFO: (7) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 16.570012ms)
Jan 28 14:04:33.956: INFO: (7) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 17.48652ms)
Jan 28 14:04:33.956: INFO: (7) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 17.43956ms)
Jan 28 14:04:33.956: INFO: (7) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 17.090473ms)
Jan 28 14:04:33.956: INFO: (7) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 18.33764ms)
Jan 28 14:04:33.957: INFO: (7) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 17.861081ms)
Jan 28 14:04:33.957: INFO: (7) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 18.714946ms)
Jan 28 14:04:33.967: INFO: (8) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 10.377379ms)
Jan 28 14:04:33.967: INFO: (8) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 10.03063ms)
Jan 28 14:04:33.967: INFO: (8) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 10.143766ms)
Jan 28 14:04:33.968: INFO: (8) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 11.009634ms)
Jan 28 14:04:33.969: INFO: (8) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 11.688995ms)
Jan 28 14:04:33.969: INFO: (8) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 12.442393ms)
Jan 28 14:04:33.970: INFO: (8) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test<... (200; 13.196863ms)
Jan 28 14:04:33.992: INFO: (9) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 15.289353ms)
Jan 28 14:04:33.992: INFO: (9) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 14.774513ms)
Jan 28 14:04:33.992: INFO: (9) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 14.85284ms)
Jan 28 14:04:33.992: INFO: (9) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 15.844061ms)
Jan 28 14:04:33.993: INFO: (9) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 15.915091ms)
Jan 28 14:04:33.995: INFO: (9) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 18.286249ms)
Jan 28 14:04:33.998: INFO: (9) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 21.052061ms)
Jan 28 14:04:33.998: INFO: (9) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 21.68571ms)
Jan 28 14:04:34.002: INFO: (9) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 24.944564ms)
Jan 28 14:04:34.003: INFO: (9) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 25.412237ms)
Jan 28 14:04:34.002: INFO: (9) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 25.686163ms)
Jan 28 14:04:34.035: INFO: (10) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 31.911967ms)
Jan 28 14:04:34.035: INFO: (10) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 31.233778ms)
Jan 28 14:04:34.035: INFO: (10) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 30.93026ms)
Jan 28 14:04:34.035: INFO: (10) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 31.381393ms)
Jan 28 14:04:34.036: INFO: (10) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 31.142492ms)
Jan 28 14:04:34.036: INFO: (10) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 32.973207ms)
Jan 28 14:04:34.036: INFO: (10) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 31.724999ms)
Jan 28 14:04:34.036: INFO: (10) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 32.88824ms)
Jan 28 14:04:34.036: INFO: (10) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 32.409883ms)
Jan 28 14:04:34.037: INFO: (10) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test<... (200; 17.542089ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 19.358422ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 19.508915ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 19.36413ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 19.501616ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 19.484627ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 20.044021ms)
Jan 28 14:04:34.062: INFO: (11) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 19.404869ms)
Jan 28 14:04:34.063: INFO: (11) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 19.971719ms)
Jan 28 14:04:34.063: INFO: (11) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 20.351163ms)
Jan 28 14:04:34.076: INFO: (12) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 12.626796ms)
Jan 28 14:04:34.076: INFO: (12) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 12.331985ms)
Jan 28 14:04:34.076: INFO: (12) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 12.246717ms)
Jan 28 14:04:34.076: INFO: (12) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 12.352555ms)
Jan 28 14:04:34.076: INFO: (12) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 13.019754ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 14.547825ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 14.302253ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 14.24575ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 14.221169ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 14.724104ms)
Jan 28 14:04:34.078: INFO: (12) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 14.721614ms)
Jan 28 14:04:34.080: INFO: (12) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 16.882895ms)
Jan 28 14:04:34.080: INFO: (12) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 17.071565ms)
Jan 28 14:04:34.080: INFO: (12) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 16.571289ms)
Jan 28 14:04:34.081: INFO: (12) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 17.165047ms)
Jan 28 14:04:34.088: INFO: (13) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 6.313629ms)
Jan 28 14:04:34.091: INFO: (13) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 9.061754ms)
Jan 28 14:04:34.096: INFO: (13) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 13.76066ms)
Jan 28 14:04:34.096: INFO: (13) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 14.561493ms)
Jan 28 14:04:34.096: INFO: (13) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 14.554697ms)
Jan 28 14:04:34.097: INFO: (13) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 15.24062ms)
Jan 28 14:04:34.098: INFO: (13) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 17.564736ms)
Jan 28 14:04:34.100: INFO: (13) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 17.697829ms)
Jan 28 14:04:34.100: INFO: (13) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 17.531522ms)
Jan 28 14:04:34.100: INFO: (13) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 18.963224ms)
Jan 28 14:04:34.101: INFO: (13) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 19.956488ms)
Jan 28 14:04:34.101: INFO: (13) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 19.752867ms)
Jan 28 14:04:34.115: INFO: (14) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 13.174852ms)
Jan 28 14:04:34.115: INFO: (14) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 12.525142ms)
Jan 28 14:04:34.115: INFO: (14) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 12.732508ms)
Jan 28 14:04:34.118: INFO: (14) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 15.848526ms)
Jan 28 14:04:34.119: INFO: (14) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 17.757819ms)
Jan 28 14:04:34.121: INFO: (14) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 18.487354ms)
Jan 28 14:04:34.122: INFO: (14) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 19.585637ms)
Jan 28 14:04:34.122: INFO: (14) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 19.966606ms)
Jan 28 14:04:34.123: INFO: (14) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 20.323993ms)
Jan 28 14:04:34.123: INFO: (14) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 21.73537ms)
Jan 28 14:04:34.123: INFO: (14) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: ... (200; 36.294676ms)
Jan 28 14:04:34.166: INFO: (15) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 36.618021ms)
Jan 28 14:04:34.166: INFO: (15) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 36.800955ms)
Jan 28 14:04:34.167: INFO: (15) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 37.524443ms)
Jan 28 14:04:34.167: INFO: (15) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 38.452805ms)
Jan 28 14:04:34.168: INFO: (15) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 37.695367ms)
Jan 28 14:04:34.168: INFO: (15) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 38.617317ms)
Jan 28 14:04:34.168: INFO: (15) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 38.103361ms)
Jan 28 14:04:34.168: INFO: (15) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 38.215793ms)
Jan 28 14:04:34.168: INFO: (15) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 38.953036ms)
Jan 28 14:04:34.169: INFO: (15) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 40.045252ms)
Jan 28 14:04:34.169: INFO: (15) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: ... (200; 12.890139ms)
Jan 28 14:04:34.184: INFO: (16) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 13.780647ms)
Jan 28 14:04:34.185: INFO: (16) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 14.50211ms)
Jan 28 14:04:34.185: INFO: (16) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 14.738472ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 15.271155ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 16.305563ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 15.755769ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 15.900684ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 16.328304ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 16.233409ms)
Jan 28 14:04:34.186: INFO: (16) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 16.167566ms)
Jan 28 14:04:34.188: INFO: (16) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 17.632762ms)
Jan 28 14:04:34.188: INFO: (16) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 17.698185ms)
Jan 28 14:04:34.188: INFO: (16) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 17.372706ms)
Jan 28 14:04:34.208: INFO: (17) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 19.633965ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 22.354103ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 22.159127ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 22.871336ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 21.784366ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 22.535082ms)
Jan 28 14:04:34.211: INFO: (17) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:1080/proxy/: ... (200; 23.151662ms)
Jan 28 14:04:34.213: INFO: (17) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 24.772782ms)
Jan 28 14:04:34.213: INFO: (17) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 24.752025ms)
Jan 28 14:04:34.213: INFO: (17) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: ... (200; 34.873804ms)
Jan 28 14:04:34.251: INFO: (18) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 35.473496ms)
Jan 28 14:04:34.253: INFO: (18) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw/proxy/: test (200; 37.406473ms)
Jan 28 14:04:34.254: INFO: (18) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 38.65549ms)
Jan 28 14:04:34.255: INFO: (18) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 39.592984ms)
Jan 28 14:04:34.256: INFO: (18) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: test (200; 17.379728ms)
Jan 28 14:04:34.282: INFO: (19) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:160/proxy/: foo (200; 18.462574ms)
Jan 28 14:04:34.282: INFO: (19) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:1080/proxy/: test<... (200; 18.08676ms)
Jan 28 14:04:34.283: INFO: (19) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:460/proxy/: tls baz (200; 17.84001ms)
Jan 28 14:04:34.283: INFO: (19) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:462/proxy/: tls qux (200; 18.259964ms)
Jan 28 14:04:34.293: INFO: (19) /api/v1/namespaces/proxy-4510/pods/https:proxy-service-dr84w-6k2cw:443/proxy/: ... (200; 30.038719ms)
Jan 28 14:04:34.295: INFO: (19) /api/v1/namespaces/proxy-4510/pods/http:proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 30.618587ms)
Jan 28 14:04:34.296: INFO: (19) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname1/proxy/: tls baz (200; 32.763762ms)
Jan 28 14:04:34.296: INFO: (19) /api/v1/namespaces/proxy-4510/pods/proxy-service-dr84w-6k2cw:162/proxy/: bar (200; 31.21414ms)
Jan 28 14:04:34.299: INFO: (19) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname1/proxy/: foo (200; 34.750121ms)
Jan 28 14:04:34.300: INFO: (19) /api/v1/namespaces/proxy-4510/services/https:proxy-service-dr84w:tlsportname2/proxy/: tls qux (200; 35.585798ms)
Jan 28 14:04:34.300: INFO: (19) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname1/proxy/: foo (200; 35.717223ms)
Jan 28 14:04:34.301: INFO: (19) /api/v1/namespaces/proxy-4510/services/proxy-service-dr84w:portname2/proxy/: bar (200; 36.406685ms)
Jan 28 14:04:34.304: INFO: (19) /api/v1/namespaces/proxy-4510/services/http:proxy-service-dr84w:portname2/proxy/: bar (200; 38.906563ms)
STEP: deleting ReplicationController proxy-service-dr84w in namespace proxy-4510, will wait for the garbage collector to delete the pods
Jan 28 14:04:34.370: INFO: Deleting ReplicationController proxy-service-dr84w took: 10.622992ms
Jan 28 14:04:34.672: INFO: Terminating ReplicationController proxy-service-dr84w pods took: 301.160268ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:04:46.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4510" for this suite.
Jan 28 14:04:52.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:04:52.737: INFO: namespace proxy-4510 deletion completed in 6.147166381s

• [SLOW TEST:37.304 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
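For reference, the `/api/v1/namespaces/.../{pods,services}/<scheme>:<name>:<port>/proxy/` paths exercised above follow a fixed pattern: an optional scheme prefix and an optional port (or named port) suffix around the object name. A minimal sketch of how those paths are assembled (the helper `proxyPath` is hypothetical, not part of the e2e framework):

```go
package main

import "fmt"

// proxyPath builds an apiserver proxy path like the ones requested in the
// test above. kind is "pods" or "services"; scheme ("http"/"https") and
// port (number or named port) are optional and joined with ":" around name.
func proxyPath(kind, ns, scheme, name, port string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
}

func main() {
	// Reproduces two of the paths seen in the log above.
	fmt.Println(proxyPath("pods", "proxy-4510", "https", "proxy-service-dr84w-6k2cw", "443"))
	fmt.Println(proxyPath("services", "proxy-4510", "", "proxy-service-dr84w", "portname1"))
}
```

Each of the twenty numbered rounds in the log hits every combination of scheme, port, and pod-vs-service target and records the latency per request.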
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:04:52.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 14:04:52.957: INFO: Waiting up to 5m0s for pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2" in namespace "emptydir-4777" to be "success or failure"
Jan 28 14:04:53.012: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 54.868641ms
Jan 28 14:04:55.018: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060195708s
Jan 28 14:04:57.070: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112653097s
Jan 28 14:04:59.129: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17167668s
Jan 28 14:05:01.140: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182866467s
Jan 28 14:05:03.163: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.205025761s
STEP: Saw pod success
Jan 28 14:05:03.163: INFO: Pod "pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2" satisfied condition "success or failure"
Jan 28 14:05:03.169: INFO: Trying to get logs from node iruya-node pod pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2 container test-container: 
STEP: delete the pod
Jan 28 14:05:03.234: INFO: Waiting for pod pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2 to disappear
Jan 28 14:05:03.242: INFO: Pod pod-b8c4d51c-1f4f-44c6-b25c-afbbb421a7a2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:05:03.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4777" for this suite.
Jan 28 14:05:09.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:05:09.460: INFO: namespace emptydir-4777 deletion completed in 6.209916679s

• [SLOW TEST:16.721 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:05:09.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-5c046cbb-fa9c-45ba-bd4a-f1067ae93770 in namespace container-probe-1694
Jan 28 14:05:17.581: INFO: Started pod liveness-5c046cbb-fa9c-45ba-bd4a-f1067ae93770 in namespace container-probe-1694
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 14:05:17.593: INFO: Initial restart count of pod liveness-5c046cbb-fa9c-45ba-bd4a-f1067ae93770 is 0
Jan 28 14:05:37.807: INFO: Restart count of pod container-probe-1694/liveness-5c046cbb-fa9c-45ba-bd4a-f1067ae93770 is now 1 (20.213711506s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:05:37.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1694" for this suite.
Jan 28 14:05:44.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:05:44.246: INFO: namespace container-probe-1694 deletion completed in 6.27052905s

• [SLOW TEST:34.784 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:05:44.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-afc36d74-d5a2-41bc-8962-29f9eb7db112
STEP: Creating a pod to test consume secrets
Jan 28 14:05:44.389: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e" in namespace "projected-9159" to be "success or failure"
Jan 28 14:05:44.458: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 68.713667ms
Jan 28 14:05:46.470: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080298442s
Jan 28 14:05:48.485: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096081608s
Jan 28 14:05:50.500: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110717941s
Jan 28 14:05:52.517: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12724641s
Jan 28 14:05:54.531: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141996641s
STEP: Saw pod success
Jan 28 14:05:54.531: INFO: Pod "pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e" satisfied condition "success or failure"
Jan 28 14:05:54.539: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e container secret-volume-test: 
STEP: delete the pod
Jan 28 14:05:54.836: INFO: Waiting for pod pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e to disappear
Jan 28 14:05:54.847: INFO: Pod pod-projected-secrets-0e4f290b-0e06-41ac-81c6-652169804a0e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:05:54.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9159" for this suite.
Jan 28 14:06:00.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:06:01.077: INFO: namespace projected-9159 deletion completed in 6.220011771s

• [SLOW TEST:16.830 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:06:01.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 14:06:11.868: INFO: Successfully updated pod "pod-update-71dcd74e-2aa2-4fde-b457-39e9732b4c06"
STEP: verifying the updated pod is in kubernetes
Jan 28 14:06:11.899: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:06:11.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1455" for this suite.
Jan 28 14:06:33.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:06:34.044: INFO: namespace pods-1455 deletion completed in 22.138985732s

• [SLOW TEST:32.965 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:06:34.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:06:41.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5606" for this suite.
Jan 28 14:06:47.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:06:47.319: INFO: namespace namespaces-5606 deletion completed in 6.140333032s
STEP: Destroying namespace "nsdeletetest-9891" for this suite.
Jan 28 14:06:47.321: INFO: Namespace nsdeletetest-9891 was already deleted
STEP: Destroying namespace "nsdeletetest-1639" for this suite.
Jan 28 14:06:53.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:06:53.506: INFO: namespace nsdeletetest-1639 deletion completed in 6.185299554s

• [SLOW TEST:19.462 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:06:53.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-621
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 14:06:53.587: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 28 14:07:33.890: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-621 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:07:33.890: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:07:34.007993       9 log.go:172] (0xc000a11080) (0xc000a18e60) Create stream
I0128 14:07:34.008334       9 log.go:172] (0xc000a11080) (0xc000a18e60) Stream added, broadcasting: 1
I0128 14:07:34.027084       9 log.go:172] (0xc000a11080) Reply frame received for 1
I0128 14:07:34.027194       9 log.go:172] (0xc000a11080) (0xc000a18f00) Create stream
I0128 14:07:34.027210       9 log.go:172] (0xc000a11080) (0xc000a18f00) Stream added, broadcasting: 3
I0128 14:07:34.029493       9 log.go:172] (0xc000a11080) Reply frame received for 3
I0128 14:07:34.029570       9 log.go:172] (0xc000a11080) (0xc00224a000) Create stream
I0128 14:07:34.029610       9 log.go:172] (0xc000a11080) (0xc00224a000) Stream added, broadcasting: 5
I0128 14:07:34.031549       9 log.go:172] (0xc000a11080) Reply frame received for 5
I0128 14:07:34.369297       9 log.go:172] (0xc000a11080) Data frame received for 3
I0128 14:07:34.369451       9 log.go:172] (0xc000a18f00) (3) Data frame handling
I0128 14:07:34.369523       9 log.go:172] (0xc000a18f00) (3) Data frame sent
I0128 14:07:34.688808       9 log.go:172] (0xc000a11080) (0xc000a18f00) Stream removed, broadcasting: 3
I0128 14:07:34.689243       9 log.go:172] (0xc000a11080) Data frame received for 1
I0128 14:07:34.689268       9 log.go:172] (0xc000a18e60) (1) Data frame handling
I0128 14:07:34.689292       9 log.go:172] (0xc000a18e60) (1) Data frame sent
I0128 14:07:34.689309       9 log.go:172] (0xc000a11080) (0xc000a18e60) Stream removed, broadcasting: 1
I0128 14:07:34.689702       9 log.go:172] (0xc000a11080) (0xc00224a000) Stream removed, broadcasting: 5
I0128 14:07:34.689778       9 log.go:172] (0xc000a11080) (0xc000a18e60) Stream removed, broadcasting: 1
I0128 14:07:34.689793       9 log.go:172] (0xc000a11080) (0xc000a18f00) Stream removed, broadcasting: 3
I0128 14:07:34.689802       9 log.go:172] (0xc000a11080) (0xc00224a000) Stream removed, broadcasting: 5
Jan 28 14:07:34.690: INFO: Found all expected endpoints: [netserver-0]
I0128 14:07:34.691027       9 log.go:172] (0xc000a11080) Go away received
Jan 28 14:07:34.715: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-621 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 14:07:34.715: INFO: >>> kubeConfig: /root/.kube/config
I0128 14:07:34.939514       9 log.go:172] (0xc000a11760) (0xc000a18fa0) Create stream
I0128 14:07:34.939701       9 log.go:172] (0xc000a11760) (0xc000a18fa0) Stream added, broadcasting: 1
I0128 14:07:34.952791       9 log.go:172] (0xc000a11760) Reply frame received for 1
I0128 14:07:34.952930       9 log.go:172] (0xc000a11760) (0xc000a190e0) Create stream
I0128 14:07:34.952949       9 log.go:172] (0xc000a11760) (0xc000a190e0) Stream added, broadcasting: 3
I0128 14:07:34.955347       9 log.go:172] (0xc000a11760) Reply frame received for 3
I0128 14:07:34.955422       9 log.go:172] (0xc000a11760) (0xc00122e280) Create stream
I0128 14:07:34.955441       9 log.go:172] (0xc000a11760) (0xc00122e280) Stream added, broadcasting: 5
I0128 14:07:34.958683       9 log.go:172] (0xc000a11760) Reply frame received for 5
I0128 14:07:35.148508       9 log.go:172] (0xc000a11760) Data frame received for 3
I0128 14:07:35.148691       9 log.go:172] (0xc000a190e0) (3) Data frame handling
I0128 14:07:35.148744       9 log.go:172] (0xc000a190e0) (3) Data frame sent
I0128 14:07:35.255726       9 log.go:172] (0xc000a11760) (0xc000a190e0) Stream removed, broadcasting: 3
I0128 14:07:35.255993       9 log.go:172] (0xc000a11760) Data frame received for 1
I0128 14:07:35.256015       9 log.go:172] (0xc000a18fa0) (1) Data frame handling
I0128 14:07:35.256070       9 log.go:172] (0xc000a18fa0) (1) Data frame sent
I0128 14:07:35.256130       9 log.go:172] (0xc000a11760) (0xc000a18fa0) Stream removed, broadcasting: 1
I0128 14:07:35.256572       9 log.go:172] (0xc000a11760) (0xc00122e280) Stream removed, broadcasting: 5
I0128 14:07:35.256610       9 log.go:172] (0xc000a11760) Go away received
I0128 14:07:35.256969       9 log.go:172] (0xc000a11760) (0xc000a18fa0) Stream removed, broadcasting: 1
I0128 14:07:35.257005       9 log.go:172] (0xc000a11760) (0xc000a190e0) Stream removed, broadcasting: 3
I0128 14:07:35.257021       9 log.go:172] (0xc000a11760) (0xc00122e280) Stream removed, broadcasting: 5
Jan 28 14:07:35.257: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:07:35.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-621" for this suite.
Jan 28 14:07:59.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:07:59.461: INFO: namespace pod-network-test-621 deletion completed in 24.185921096s

• [SLOW TEST:65.953 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:07:59.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 14:07:59.576: INFO: Waiting up to 5m0s for pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531" in namespace "emptydir-1198" to be "success or failure"
Jan 28 14:07:59.590: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531": Phase="Pending", Reason="", readiness=false. Elapsed: 13.406078ms
Jan 28 14:08:01.602: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025494147s
Jan 28 14:08:03.613: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037349789s
Jan 28 14:08:05.620: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043567805s
Jan 28 14:08:07.628: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051974608s
STEP: Saw pod success
Jan 28 14:08:07.628: INFO: Pod "pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531" satisfied condition "success or failure"
Jan 28 14:08:07.632: INFO: Trying to get logs from node iruya-node pod pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531 container test-container: 
STEP: delete the pod
Jan 28 14:08:07.678: INFO: Waiting for pod pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531 to disappear
Jan 28 14:08:07.748: INFO: Pod pod-a0cfc38c-6164-47f3-9d3a-a7fe00970531 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:08:07.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1198" for this suite.
Jan 28 14:08:13.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:08:14.011: INFO: namespace emptydir-1198 deletion completed in 6.237936996s

• [SLOW TEST:14.549 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
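The emptydir case above creates a short-lived pod that mounts a tmpfs-backed emptyDir volume as a non-root user, writes to it, and reports the observed permissions in the container log. A minimal manifest along the same lines — the name, image, and verification command below are illustrative, not the exact spec the e2e framework generates:

```yaml
# Hypothetical sketch of what the (non-root,0777,tmpfs) case exercises.
# medium: Memory backs the emptyDir with tmpfs; the container inspects the
# mount's mode and writes a file, and the pod running to completion
# (Phase "Succeeded") is the "success or failure" condition logged above.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777-demo   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the "non-root" part of the case name
  containers:
  - name: test-container
    image: busybox                 # the suite uses its own mounttest image
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
```

The (root,0777,default) and (non-root,0666,default) runs later in this log differ only in the user, the expected mode, and omitting `medium: Memory` so the volume lands on the node's default storage.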
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:08:14.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan 28 14:08:14.223: INFO: Waiting up to 5m0s for pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789" in namespace "containers-8852" to be "success or failure"
Jan 28 14:08:14.235: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Pending", Reason="", readiness=false. Elapsed: 11.166888ms
Jan 28 14:08:16.261: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038092749s
Jan 28 14:08:18.285: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061607645s
Jan 28 14:08:20.390: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Pending", Reason="", readiness=false. Elapsed: 6.166906882s
Jan 28 14:08:22.445: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22177033s
Jan 28 14:08:24.498: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.274853609s
STEP: Saw pod success
Jan 28 14:08:24.499: INFO: Pod "client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789" satisfied condition "success or failure"
Jan 28 14:08:24.506: INFO: Trying to get logs from node iruya-node pod client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789 container test-container: 
STEP: delete the pod
Jan 28 14:08:24.600: INFO: Waiting for pod client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789 to disappear
Jan 28 14:08:24.688: INFO: Pod client-containers-e211b3ba-c3b2-494c-b6d3-12fa9550b789 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:08:24.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8852" for this suite.
Jan 28 14:08:30.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:08:30.910: INFO: namespace containers-8852 deletion completed in 6.213221912s

• [SLOW TEST:16.895 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
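The Docker Containers case relies on the fact that a pod's `command` replaces the image's ENTRYPOINT and `args` replaces its CMD; setting both is the "override all" step logged above. A hedged sketch (names and strings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # any image with a default entrypoint/cmd
    command: ["echo"]               # overrides the image's ENTRYPOINT
    args: ["override", "arguments"] # overrides the image's CMD
```

The test then reads the container log to confirm the overridden command actually ran.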
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:08:30.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:08:30.998: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf" in namespace "projected-1418" to be "success or failure"
Jan 28 14:08:31.005: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.590649ms
Jan 28 14:08:33.017: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018443s
Jan 28 14:08:35.030: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031910809s
Jan 28 14:08:37.051: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052613077s
Jan 28 14:08:39.059: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060918851s
STEP: Saw pod success
Jan 28 14:08:39.059: INFO: Pod "downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf" satisfied condition "success or failure"
Jan 28 14:08:39.066: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf container client-container: 
STEP: delete the pod
Jan 28 14:08:39.119: INFO: Waiting for pod downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf to disappear
Jan 28 14:08:39.171: INFO: Pod downwardapi-volume-f6b3d056-c5bc-4b06-9867-d76fd96d9ebf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:08:39.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1418" for this suite.
Jan 28 14:08:45.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:08:45.446: INFO: namespace projected-1418 deletion completed in 6.267225426s

• [SLOW TEST:14.536 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
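The projected downward API case mounts a file whose contents come from `resourceFieldRef: limits.cpu`. When the container declares no CPU limit, the kubelet substitutes the node's allocatable CPU — the behavior named in the spec title. A sketch under those assumptions (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the projected file falls back
    # to the node's allocatable CPU rather than a container limit.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```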
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:08:45.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 14:08:45.617: INFO: Waiting up to 5m0s for pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694" in namespace "emptydir-8849" to be "success or failure"
Jan 28 14:08:45.649: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694": Phase="Pending", Reason="", readiness=false. Elapsed: 31.509293ms
Jan 28 14:08:47.660: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042565531s
Jan 28 14:08:49.667: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04984207s
Jan 28 14:08:51.674: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056324503s
Jan 28 14:08:53.688: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070754663s
STEP: Saw pod success
Jan 28 14:08:53.689: INFO: Pod "pod-925e675f-6cc1-4968-9a41-83f53aa1f694" satisfied condition "success or failure"
Jan 28 14:08:53.692: INFO: Trying to get logs from node iruya-node pod pod-925e675f-6cc1-4968-9a41-83f53aa1f694 container test-container: 
STEP: delete the pod
Jan 28 14:08:53.794: INFO: Waiting for pod pod-925e675f-6cc1-4968-9a41-83f53aa1f694 to disappear
Jan 28 14:08:53.874: INFO: Pod pod-925e675f-6cc1-4968-9a41-83f53aa1f694 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:08:53.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8849" for this suite.
Jan 28 14:08:59.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:09:00.034: INFO: namespace emptydir-8849 deletion completed in 6.142752898s

• [SLOW TEST:14.586 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:09:00.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 28 14:09:00.175: INFO: Waiting up to 5m0s for pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56" in namespace "emptydir-3845" to be "success or failure"
Jan 28 14:09:00.185: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56": Phase="Pending", Reason="", readiness=false. Elapsed: 9.995423ms
Jan 28 14:09:02.194: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018607884s
Jan 28 14:09:04.205: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029909952s
Jan 28 14:09:06.218: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042217484s
Jan 28 14:09:08.257: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081752908s
STEP: Saw pod success
Jan 28 14:09:08.257: INFO: Pod "pod-fdbbb92c-9a35-49be-bcc9-91164d035e56" satisfied condition "success or failure"
Jan 28 14:09:08.263: INFO: Trying to get logs from node iruya-node pod pod-fdbbb92c-9a35-49be-bcc9-91164d035e56 container test-container: 
STEP: delete the pod
Jan 28 14:09:08.354: INFO: Waiting for pod pod-fdbbb92c-9a35-49be-bcc9-91164d035e56 to disappear
Jan 28 14:09:08.446: INFO: Pod pod-fdbbb92c-9a35-49be-bcc9-91164d035e56 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:09:08.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3845" for this suite.
Jan 28 14:09:14.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:09:14.670: INFO: namespace emptydir-3845 deletion completed in 6.209778527s

• [SLOW TEST:14.635 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:09:14.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 28 14:09:14.785: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195144,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 28 14:09:14.785: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195145,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 28 14:09:14.786: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195146,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 28 14:09:24.864: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195162,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 28 14:09:24.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195163,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 28 14:09:24.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1089,SelfLink:/api/v1/namespaces/watch-1089/configmaps/e2e-watch-test-label-changed,UID:184c9cba-a8e0-44e7-abfb-f675fa0a0d91,ResourceVersion:22195164,Generation:0,CreationTimestamp:2020-01-28 14:09:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:09:24.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1089" for this suite.
Jan 28 14:09:30.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:09:31.096: INFO: namespace watch-1089 deletion completed in 6.216312483s

• [SLOW TEST:16.425 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:09:31.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan 28 14:09:31.203: INFO: Waiting up to 5m0s for pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f" in namespace "var-expansion-1745" to be "success or failure"
Jan 28 14:09:31.230: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.038392ms
Jan 28 14:09:33.257: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053735698s
Jan 28 14:09:35.315: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11137112s
Jan 28 14:09:37.325: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121252735s
Jan 28 14:09:39.335: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.131752299s
Jan 28 14:09:41.345: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141445981s
STEP: Saw pod success
Jan 28 14:09:41.345: INFO: Pod "var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f" satisfied condition "success or failure"
Jan 28 14:09:41.350: INFO: Trying to get logs from node iruya-node pod var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f container dapi-container: 
STEP: delete the pod
Jan 28 14:09:41.472: INFO: Waiting for pod var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f to disappear
Jan 28 14:09:41.481: INFO: Pod var-expansion-cc2b6ef9-489e-4d37-919d-411be807654f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:09:41.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1745" for this suite.
Jan 28 14:09:47.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:09:47.652: INFO: namespace var-expansion-1745 deletion completed in 6.162136572s

• [SLOW TEST:16.555 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
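The Variable Expansion case composes environment variables with the `$(VAR)` syntax, which Kubernetes itself expands from entries defined earlier in the same container's `env` list (no shell involved). A sketch with illustrative names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOO_COMPOSED"]
    env:
    - name: FOO
      value: "foo-value"
    - name: FOO_COMPOSED
      value: "prefix-$(FOO)-suffix" # $(FOO) is expanded by Kubernetes, not the shell
```

The test verifies the composed value by inspecting the container's log output.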
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:09:47.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan 28 14:09:47.714: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 14:09:47.725: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 14:09:47.730: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Jan 28 14:09:47.743: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan 28 14:09:47.743: INFO: 	Container weave ready: true, restart count 0
Jan 28 14:09:47.743: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 14:09:47.743: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.743: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 14:09:47.743: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Jan 28 14:09:47.763: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan 28 14:09:47.763: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container kube-scheduler ready: true, restart count 13
Jan 28 14:09:47.763: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container coredns ready: true, restart count 0
Jan 28 14:09:47.763: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container etcd ready: true, restart count 0
Jan 28 14:09:47.763: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container weave ready: true, restart count 0
Jan 28 14:09:47.763: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 14:09:47.763: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.763: INFO: 	Container coredns ready: true, restart count 0
Jan 28 14:09:47.764: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.764: INFO: 	Container kube-controller-manager ready: true, restart count 19
Jan 28 14:09:47.764: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Jan 28 14:09:47.764: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan 28 14:09:47.924: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan 28 14:09:47.924: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f.15ee12409fb66a9d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4257/filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f.15ee1241d69e0079], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f.15ee12429b3c32a2], Reason = [Created], Message = [Created container filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f.15ee1242c3112e9e], Reason = [Started], Message = [Started container filler-pod-08e18c5e-c013-406d-9678-4403c400fa9f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-276265c8-454a-44fc-966a-3e3aef966df6.15ee12409d2e7fca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4257/filler-pod-276265c8-454a-44fc-966a-3e3aef966df6 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-276265c8-454a-44fc-966a-3e3aef966df6.15ee1241882c5e36], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-276265c8-454a-44fc-966a-3e3aef966df6.15ee124205ec4702], Reason = [Created], Message = [Created container filler-pod-276265c8-454a-44fc-966a-3e3aef966df6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-276265c8-454a-44fc-966a-3e3aef966df6.15ee1242316c3fde], Reason = [Started], Message = [Started container filler-pod-276265c8-454a-44fc-966a-3e3aef966df6]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ee12436e15fb53], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:10:01.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4257" for this suite.
Jan 28 14:10:09.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:10:09.553: INFO: namespace sched-pred-4257 deletion completed in 8.168383111s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.899 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
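The scheduler predicate test above labels both nodes, launches "filler" pods sized to consume most of each node's remaining allocatable CPU, then confirms that one more pod with a CPU request cannot schedule — the `FailedScheduling` / `Insufficient cpu` event in the log. The rough shape of such a filler pod (the CPU figure is illustrative; the real test computes it per node from allocatable minus existing requests, which it logs above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-demo             # illustrative; the test generates UUID names
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1     # the image visible in the Pulled events above
    resources:
      requests:
        cpu: "600m"                 # assumed value; sized to saturate the node
      limits:
        cpu: "600m"
```

With both nodes saturated, any additional pod requesting CPU produces "0/2 nodes are available: 2 Insufficient cpu."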
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:10:09.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-ff30a329-89f1-4505-9b8f-ef01aaa2fc60
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:10:10.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6836" for this suite.
Jan 28 14:10:17.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:10:17.330: INFO: namespace configmap-6836 deletion completed in 6.305268209s

• [SLOW TEST:7.775 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
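For reference, the manifest shape this test submits looks roughly like the following (name and value are illustrative, not taken from the log). A ConfigMap key must be a valid data key, so the apiserver rejects the empty `""` key with a validation error rather than creating the object:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey   # illustrative name
data:
  "": "value"                     # empty key: rejected by apiserver validation
```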
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:10:17.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:11:09.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-991" for this suite.
Jan 28 14:11:15.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:11:15.710: INFO: namespace container-runtime-991 deletion completed in 6.200421313s

• [SLOW TEST:58.379 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:11:15.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:11:16.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1893" for this suite.
Jan 28 14:11:22.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:11:22.175: INFO: namespace kubelet-test-1893 deletion completed in 6.146354323s

• [SLOW TEST:6.464 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:11:22.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:11:22.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 28 14:11:22.433: INFO: stderr: ""
Jan 28 14:11:22.434: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:11:22.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8130" for this suite.
Jan 28 14:11:28.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:11:28.630: INFO: namespace kubectl-8130 deletion completed in 6.181306958s

• [SLOW TEST:6.454 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:11:28.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0128 14:11:59.316580       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 14:11:59.316: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:11:59.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1785" for this suite.
Jan 28 14:12:05.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:12:05.527: INFO: namespace gc-1785 deletion completed in 6.203367693s

• [SLOW TEST:36.897 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
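The orphaning behavior verified above is driven by the delete options sent with the Deployment deletion. A minimal sketch of that request body (assuming the standard `DeleteOptions` API type) is:

```yaml
# DeleteOptions body for the deployment deletion; with
# propagationPolicy: Orphan the owned ReplicaSet is left behind
# instead of being garbage-collected.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl of this era, `kubectl delete deployment NAME --cascade=false` requests the same orphaning behavior.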
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:12:05.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan 28 14:12:06.735: INFO: Waiting up to 5m0s for pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d" in namespace "containers-7173" to be "success or failure"
Jan 28 14:12:06.749: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.415119ms
Jan 28 14:12:08.759: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023416891s
Jan 28 14:12:10.773: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03806234s
Jan 28 14:12:12.791: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055556781s
Jan 28 14:12:14.801: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065860889s
Jan 28 14:12:16.812: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.077004848s
STEP: Saw pod success
Jan 28 14:12:16.813: INFO: Pod "client-containers-e423974b-2bac-4447-b17f-038c9df5034d" satisfied condition "success or failure"
Jan 28 14:12:16.911: INFO: Trying to get logs from node iruya-node pod client-containers-e423974b-2bac-4447-b17f-038c9df5034d container test-container: 
STEP: delete the pod
Jan 28 14:12:17.067: INFO: Waiting for pod client-containers-e423974b-2bac-4447-b17f-038c9df5034d to disappear
Jan 28 14:12:17.080: INFO: Pod client-containers-e423974b-2bac-4447-b17f-038c9df5034d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:12:17.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7173" for this suite.
Jan 28 14:12:23.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:12:23.272: INFO: namespace containers-7173 deletion completed in 6.184075957s

• [SLOW TEST:17.744 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:12:23.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 28 14:12:31.945: INFO: Successfully updated pod "labelsupdate5060b0ed-7ed8-4dba-86d3-008268341ea0"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:12:36.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8037" for this suite.
Jan 28 14:12:58.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:12:58.249: INFO: namespace projected-8037 deletion completed in 22.172551574s

• [SLOW TEST:34.977 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:12:58.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-224/configmap-test-12845905-ac1d-4b18-b3f1-146588ad720a
STEP: Creating a pod to test consume configMaps
Jan 28 14:12:58.529: INFO: Waiting up to 5m0s for pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f" in namespace "configmap-224" to be "success or failure"
Jan 28 14:12:58.545: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.592329ms
Jan 28 14:13:00.570: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04107534s
Jan 28 14:13:02.583: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053274822s
Jan 28 14:13:04.595: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066248762s
Jan 28 14:13:06.604: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074931015s
STEP: Saw pod success
Jan 28 14:13:06.604: INFO: Pod "pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f" satisfied condition "success or failure"
Jan 28 14:13:06.609: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f container env-test: 
STEP: delete the pod
Jan 28 14:13:06.654: INFO: Waiting for pod pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f to disappear
Jan 28 14:13:06.672: INFO: Pod pod-configmaps-d6cdb114-8040-47c6-a612-43bec964520f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:13:06.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-224" for this suite.
Jan 28 14:13:12.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:13:12.899: INFO: namespace configmap-224 deletion completed in 6.218993105s

• [SLOW TEST:14.650 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:13:12.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 28 14:13:12.986: INFO: Waiting up to 5m0s for pod "pod-5dea698c-ab23-4e51-8219-ef0977665602" in namespace "emptydir-125" to be "success or failure"
Jan 28 14:13:13.047: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602": Phase="Pending", Reason="", readiness=false. Elapsed: 60.35706ms
Jan 28 14:13:15.059: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072859671s
Jan 28 14:13:17.072: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085575188s
Jan 28 14:13:19.081: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09448187s
Jan 28 14:13:21.089: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103095332s
STEP: Saw pod success
Jan 28 14:13:21.090: INFO: Pod "pod-5dea698c-ab23-4e51-8219-ef0977665602" satisfied condition "success or failure"
Jan 28 14:13:21.094: INFO: Trying to get logs from node iruya-node pod pod-5dea698c-ab23-4e51-8219-ef0977665602 container test-container: 
STEP: delete the pod
Jan 28 14:13:21.164: INFO: Waiting for pod pod-5dea698c-ab23-4e51-8219-ef0977665602 to disappear
Jan 28 14:13:21.179: INFO: Pod pod-5dea698c-ab23-4e51-8219-ef0977665602 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:13:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-125" for this suite.
Jan 28 14:13:27.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:13:27.411: INFO: namespace emptydir-125 deletion completed in 6.225886523s

• [SLOW TEST:14.511 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:13:27.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod

STEP: Reading file content from the nginx-container
Jan 28 14:13:39.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-63e176ba-a3ea-4f7e-a249-fc88d53c974f -c busybox-main-container --namespace=emptydir-3153 -- cat /usr/share/volumeshare/shareddata.txt'
Jan 28 14:13:42.132: INFO: stderr: "I0128 14:13:41.697546    2193 log.go:172] (0xc00069a210) (0xc0001fea00) Create stream\nI0128 14:13:41.698011    2193 log.go:172] (0xc00069a210) (0xc0001fea00) Stream added, broadcasting: 1\nI0128 14:13:41.719062    2193 log.go:172] (0xc00069a210) Reply frame received for 1\nI0128 14:13:41.719276    2193 log.go:172] (0xc00069a210) (0xc0007260a0) Create stream\nI0128 14:13:41.719337    2193 log.go:172] (0xc00069a210) (0xc0007260a0) Stream added, broadcasting: 3\nI0128 14:13:41.727485    2193 log.go:172] (0xc00069a210) Reply frame received for 3\nI0128 14:13:41.727542    2193 log.go:172] (0xc00069a210) (0xc000726140) Create stream\nI0128 14:13:41.727556    2193 log.go:172] (0xc00069a210) (0xc000726140) Stream added, broadcasting: 5\nI0128 14:13:41.732024    2193 log.go:172] (0xc00069a210) Reply frame received for 5\nI0128 14:13:41.896750    2193 log.go:172] (0xc00069a210) Data frame received for 3\nI0128 14:13:41.896888    2193 log.go:172] (0xc0007260a0) (3) Data frame handling\nI0128 14:13:41.896917    2193 log.go:172] (0xc0007260a0) (3) Data frame sent\nI0128 14:13:42.114899    2193 log.go:172] (0xc00069a210) Data frame received for 1\nI0128 14:13:42.115126    2193 log.go:172] (0xc0001fea00) (1) Data frame handling\nI0128 14:13:42.115180    2193 log.go:172] (0xc0001fea00) (1) Data frame sent\nI0128 14:13:42.115815    2193 log.go:172] (0xc00069a210) (0xc000726140) Stream removed, broadcasting: 5\nI0128 14:13:42.115962    2193 log.go:172] (0xc00069a210) (0xc0001fea00) Stream removed, broadcasting: 1\nI0128 14:13:42.117024    2193 log.go:172] (0xc00069a210) (0xc0007260a0) Stream removed, broadcasting: 3\nI0128 14:13:42.117438    2193 log.go:172] (0xc00069a210) Go away received\nI0128 14:13:42.118716    2193 log.go:172] (0xc00069a210) (0xc0001fea00) Stream removed, broadcasting: 1\nI0128 14:13:42.118788    2193 log.go:172] (0xc00069a210) (0xc0007260a0) Stream removed, broadcasting: 3\nI0128 14:13:42.118813    2193 log.go:172] (0xc00069a210) (0xc000726140) Stream removed, broadcasting: 5\n"
Jan 28 14:13:42.133: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:13:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3153" for this suite.
Jan 28 14:13:48.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:13:48.308: INFO: namespace emptydir-3153 deletion completed in 6.163407466s

• [SLOW TEST:20.896 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
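The shared-volume setup exercised above mounts one `emptyDir` into both containers; the busybox container writes `/usr/share/volumeshare/shareddata.txt` and the test reads it back through the nginx container. A minimal sketch of the pod spec (images and the exact write command are assumptions; the pod name, mount path, and message match the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example   # illustrative; the log's pod has a generated suffix
spec:
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}   # backed by node storage; shared by both containers
```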
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:13:48.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:13:48.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130" in namespace "projected-2707" to be "success or failure"
Jan 28 14:13:48.443: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130": Phase="Pending", Reason="", readiness=false. Elapsed: 11.095408ms
Jan 28 14:13:50.455: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023421783s
Jan 28 14:13:52.470: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039013106s
Jan 28 14:13:54.485: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053780772s
Jan 28 14:13:56.499: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067816915s
STEP: Saw pod success
Jan 28 14:13:56.499: INFO: Pod "downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130" satisfied condition "success or failure"
Jan 28 14:13:56.506: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130 container client-container: 
STEP: delete the pod
Jan 28 14:13:56.604: INFO: Waiting for pod downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130 to disappear
Jan 28 14:13:56.614: INFO: Pod downwardapi-volume-65617753-d44b-4146-bc8f-3d4c294c6130 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:13:56.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2707" for this suite.
Jan 28 14:14:02.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:14:02.825: INFO: namespace projected-2707 deletion completed in 6.201986789s

• [SLOW TEST:14.515 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
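The default-limit behavior verified above comes from a projected downward API volume using `resourceFieldRef`: when the container sets no `resources.limits.memory`, the reported value falls back to the node's allocatable memory. A minimal sketch (image, command, and file path are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set, so node allocatable memory is reported
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "memory_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```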
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:14:02.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:14:02.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8" in namespace "downward-api-5951" to be "success or failure"
Jan 28 14:14:02.979: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684623ms
Jan 28 14:14:04.991: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022705045s
Jan 28 14:14:07.011: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042215168s
Jan 28 14:14:09.020: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051901364s
Jan 28 14:14:11.032: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063021006s
STEP: Saw pod success
Jan 28 14:14:11.032: INFO: Pod "downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8" satisfied condition "success or failure"
Jan 28 14:14:11.037: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8 container client-container: 
STEP: delete the pod
Jan 28 14:14:11.151: INFO: Waiting for pod downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8 to disappear
Jan 28 14:14:11.160: INFO: Pod downwardapi-volume-bd9dc93b-0399-41e3-99f8-0fd2119b81d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:14:11.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5951" for this suite.
Jan 28 14:14:17.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:14:17.362: INFO: namespace downward-api-5951 deletion completed in 6.190742056s

• [SLOW TEST:14.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:14:17.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lgbn
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 14:14:17.603: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lgbn" in namespace "subpath-2344" to be "success or failure"
Jan 28 14:14:17.609: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.476315ms
Jan 28 14:14:19.616: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013425173s
Jan 28 14:14:21.628: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025594622s
Jan 28 14:14:23.654: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051117034s
Jan 28 14:14:25.667: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 8.064350418s
Jan 28 14:14:27.678: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 10.074906374s
Jan 28 14:14:29.686: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 12.08312784s
Jan 28 14:14:31.702: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 14.099560043s
Jan 28 14:14:33.716: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 16.113327768s
Jan 28 14:14:35.734: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 18.131578322s
Jan 28 14:14:37.751: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 20.148418519s
Jan 28 14:14:39.764: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 22.161396828s
Jan 28 14:14:41.771: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 24.168609695s
Jan 28 14:14:43.790: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 26.187561146s
Jan 28 14:14:45.803: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Running", Reason="", readiness=true. Elapsed: 28.199920652s
Jan 28 14:14:47.820: INFO: Pod "pod-subpath-test-configmap-lgbn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.21740071s
STEP: Saw pod success
Jan 28 14:14:47.820: INFO: Pod "pod-subpath-test-configmap-lgbn" satisfied condition "success or failure"
Jan 28 14:14:47.841: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lgbn container test-container-subpath-configmap-lgbn: 
STEP: delete the pod
Jan 28 14:14:48.050: INFO: Waiting for pod pod-subpath-test-configmap-lgbn to disappear
Jan 28 14:14:48.064: INFO: Pod pod-subpath-test-configmap-lgbn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lgbn
Jan 28 14:14:48.064: INFO: Deleting pod "pod-subpath-test-configmap-lgbn" in namespace "subpath-2344"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:14:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2344" for this suite.
Jan 28 14:14:54.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:14:54.215: INFO: namespace subpath-2344 deletion completed in 6.139222041s

• [SLOW TEST:36.851 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:14:54.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:14:54.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e" in namespace "projected-2442" to be "success or failure"
Jan 28 14:14:54.329: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.563666ms
Jan 28 14:14:56.339: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01569375s
Jan 28 14:14:58.351: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027001729s
Jan 28 14:15:00.388: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064105384s
Jan 28 14:15:02.411: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087525117s
STEP: Saw pod success
Jan 28 14:15:02.411: INFO: Pod "downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e" satisfied condition "success or failure"
Jan 28 14:15:02.416: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e container client-container: 
STEP: delete the pod
Jan 28 14:15:02.569: INFO: Waiting for pod downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e to disappear
Jan 28 14:15:02.589: INFO: Pod downwardapi-volume-2818c77e-f359-4f42-ad48-37f881c9cd3e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:15:02.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2442" for this suite.
Jan 28 14:15:08.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:15:08.760: INFO: namespace projected-2442 deletion completed in 6.158580887s

• [SLOW TEST:14.544 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:15:08.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:15:08.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:15:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5637" for this suite.
Jan 28 14:15:58.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:15:59.101: INFO: namespace pods-5637 deletion completed in 42.155925163s

• [SLOW TEST:50.340 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:15:59.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan 28 14:16:09.273: INFO: Pod pod-hostip-704f2b31-0677-49a0-a1a2-bd3d1dfb16c0 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:16:09.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3733" for this suite.
Jan 28 14:16:31.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:16:31.457: INFO: namespace pods-3733 deletion completed in 22.177016502s

• [SLOW TEST:32.355 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:16:31.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4401
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4401
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4401
Jan 28 14:16:31.566: INFO: Found 0 stateful pods, waiting for 1
Jan 28 14:16:41.582: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 28 14:16:41.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:16:42.201: INFO: stderr: "I0128 14:16:41.864845    2226 log.go:172] (0xc000a72370) (0xc00093a640) Create stream\nI0128 14:16:41.865337    2226 log.go:172] (0xc000a72370) (0xc00093a640) Stream added, broadcasting: 1\nI0128 14:16:41.874424    2226 log.go:172] (0xc000a72370) Reply frame received for 1\nI0128 14:16:41.874480    2226 log.go:172] (0xc000a72370) (0xc0008d6000) Create stream\nI0128 14:16:41.874493    2226 log.go:172] (0xc000a72370) (0xc0008d6000) Stream added, broadcasting: 3\nI0128 14:16:41.875800    2226 log.go:172] (0xc000a72370) Reply frame received for 3\nI0128 14:16:41.875824    2226 log.go:172] (0xc000a72370) (0xc000a4c000) Create stream\nI0128 14:16:41.875835    2226 log.go:172] (0xc000a72370) (0xc000a4c000) Stream added, broadcasting: 5\nI0128 14:16:41.877389    2226 log.go:172] (0xc000a72370) Reply frame received for 5\nI0128 14:16:41.987550    2226 log.go:172] (0xc000a72370) Data frame received for 5\nI0128 14:16:41.987732    2226 log.go:172] (0xc000a4c000) (5) Data frame handling\nI0128 14:16:41.987760    2226 log.go:172] (0xc000a4c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:16:42.035364    2226 log.go:172] (0xc000a72370) Data frame received for 3\nI0128 14:16:42.035472    2226 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0128 14:16:42.035488    2226 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0128 14:16:42.180841    2226 log.go:172] (0xc000a72370) Data frame received for 1\nI0128 14:16:42.181740    2226 log.go:172] (0xc000a72370) (0xc000a4c000) Stream removed, broadcasting: 5\nI0128 14:16:42.181975    2226 log.go:172] (0xc00093a640) (1) Data frame handling\nI0128 14:16:42.182163    2226 log.go:172] (0xc00093a640) (1) Data frame sent\nI0128 14:16:42.182311    2226 log.go:172] (0xc000a72370) (0xc0008d6000) Stream removed, broadcasting: 3\nI0128 14:16:42.182468    2226 log.go:172] (0xc000a72370) (0xc00093a640) Stream removed, broadcasting: 1\nI0128 14:16:42.182535    2226 log.go:172] (0xc000a72370) Go away received\nI0128 14:16:42.184364    2226 log.go:172] (0xc000a72370) (0xc00093a640) Stream removed, broadcasting: 1\nI0128 14:16:42.184393    2226 log.go:172] (0xc000a72370) (0xc0008d6000) Stream removed, broadcasting: 3\nI0128 14:16:42.184409    2226 log.go:172] (0xc000a72370) (0xc000a4c000) Stream removed, broadcasting: 5\n"
Jan 28 14:16:42.201: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:16:42.201: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 14:16:42.216: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 28 14:16:52.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 14:16:52.229: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 14:16:52.268: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999907s
Jan 28 14:16:53.286: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982533879s
Jan 28 14:16:54.301: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96491308s
Jan 28 14:16:55.310: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.950080405s
Jan 28 14:16:56.329: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.940548059s
Jan 28 14:16:57.340: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.921172511s
Jan 28 14:16:58.355: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.910635725s
Jan 28 14:16:59.365: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.895976586s
Jan 28 14:17:00.384: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.885821228s
Jan 28 14:17:01.475: INFO: Verifying statefulset ss doesn't scale past 1 for another 866.446627ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4401
Jan 28 14:17:02.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:17:03.010: INFO: stderr: "I0128 14:17:02.684999    2246 log.go:172] (0xc000944370) (0xc0005b4820) Create stream\nI0128 14:17:02.685248    2246 log.go:172] (0xc000944370) (0xc0005b4820) Stream added, broadcasting: 1\nI0128 14:17:02.691089    2246 log.go:172] (0xc000944370) Reply frame received for 1\nI0128 14:17:02.691168    2246 log.go:172] (0xc000944370) (0xc0005b48c0) Create stream\nI0128 14:17:02.691182    2246 log.go:172] (0xc000944370) (0xc0005b48c0) Stream added, broadcasting: 3\nI0128 14:17:02.692663    2246 log.go:172] (0xc000944370) Reply frame received for 3\nI0128 14:17:02.692683    2246 log.go:172] (0xc000944370) (0xc000578000) Create stream\nI0128 14:17:02.692690    2246 log.go:172] (0xc000944370) (0xc000578000) Stream added, broadcasting: 5\nI0128 14:17:02.693910    2246 log.go:172] (0xc000944370) Reply frame received for 5\nI0128 14:17:02.845973    2246 log.go:172] (0xc000944370) Data frame received for 5\nI0128 14:17:02.846109    2246 log.go:172] (0xc000578000) (5) Data frame handling\nI0128 14:17:02.846126    2246 log.go:172] (0xc000578000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:17:02.846168    2246 log.go:172] (0xc000944370) Data frame received for 3\nI0128 14:17:02.846177    2246 log.go:172] (0xc0005b48c0) (3) Data frame handling\nI0128 14:17:02.846187    2246 log.go:172] (0xc0005b48c0) (3) Data frame sent\nI0128 14:17:03.001593    2246 log.go:172] (0xc000944370) (0xc0005b48c0) Stream removed, broadcasting: 3\nI0128 14:17:03.001924    2246 log.go:172] (0xc000944370) Data frame received for 1\nI0128 14:17:03.002057    2246 log.go:172] (0xc000944370) (0xc000578000) Stream removed, broadcasting: 5\nI0128 14:17:03.002082    2246 log.go:172] (0xc0005b4820) (1) Data frame handling\nI0128 14:17:03.002108    2246 log.go:172] (0xc0005b4820) (1) Data frame sent\nI0128 14:17:03.002124    2246 log.go:172] (0xc000944370) (0xc0005b4820) Stream removed, broadcasting: 1\nI0128 14:17:03.002138    2246 log.go:172] (0xc000944370) Go away received\nI0128 14:17:03.003199    2246 log.go:172] (0xc000944370) (0xc0005b4820) Stream removed, broadcasting: 1\nI0128 14:17:03.003218    2246 log.go:172] (0xc000944370) (0xc0005b48c0) Stream removed, broadcasting: 3\nI0128 14:17:03.003223    2246 log.go:172] (0xc000944370) (0xc000578000) Stream removed, broadcasting: 5\n"
Jan 28 14:17:03.010: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:17:03.011: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:17:03.020: INFO: Found 1 stateful pods, waiting for 3
Jan 28 14:17:13.031: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:17:13.031: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:17:13.031: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 14:17:23.029: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:17:23.029: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:17:23.029: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 28 14:17:23.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:17:23.758: INFO: stderr: "I0128 14:17:23.306906    2263 log.go:172] (0xc000980370) (0xc000382820) Create stream\nI0128 14:17:23.307096    2263 log.go:172] (0xc000980370) (0xc000382820) Stream added, broadcasting: 1\nI0128 14:17:23.356160    2263 log.go:172] (0xc000980370) Reply frame received for 1\nI0128 14:17:23.356775    2263 log.go:172] (0xc000980370) (0xc00065e320) Create stream\nI0128 14:17:23.356851    2263 log.go:172] (0xc000980370) (0xc00065e320) Stream added, broadcasting: 3\nI0128 14:17:23.372282    2263 log.go:172] (0xc000980370) Reply frame received for 3\nI0128 14:17:23.372567    2263 log.go:172] (0xc000980370) (0xc000938000) Create stream\nI0128 14:17:23.372633    2263 log.go:172] (0xc000980370) (0xc000938000) Stream added, broadcasting: 5\nI0128 14:17:23.413591    2263 log.go:172] (0xc000980370) Reply frame received for 5\nI0128 14:17:23.554050    2263 log.go:172] (0xc000980370) Data frame received for 3\nI0128 14:17:23.554120    2263 log.go:172] (0xc00065e320) (3) Data frame handling\nI0128 14:17:23.554134    2263 log.go:172] (0xc00065e320) (3) Data frame sent\nI0128 14:17:23.554171    2263 log.go:172] (0xc000980370) Data frame received for 5\nI0128 14:17:23.554181    2263 log.go:172] (0xc000938000) (5) Data frame handling\nI0128 14:17:23.554196    2263 log.go:172] (0xc000938000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:17:23.743214    2263 log.go:172] (0xc000980370) Data frame received for 1\nI0128 14:17:23.743428    2263 log.go:172] (0xc000980370) (0xc00065e320) Stream removed, broadcasting: 3\nI0128 14:17:23.743588    2263 log.go:172] (0xc000980370) (0xc000938000) Stream removed, broadcasting: 5\nI0128 14:17:23.743660    2263 log.go:172] (0xc000382820) (1) Data frame handling\nI0128 14:17:23.743695    2263 log.go:172] (0xc000382820) (1) Data frame sent\nI0128 14:17:23.743707    2263 log.go:172] (0xc000980370) (0xc000382820) Stream removed, broadcasting: 1\nI0128 14:17:23.743721    2263 log.go:172] (0xc000980370) Go away received\nI0128 14:17:23.744851    2263 log.go:172] (0xc000980370) (0xc000382820) Stream removed, broadcasting: 1\nI0128 14:17:23.744871    2263 log.go:172] (0xc000980370) (0xc00065e320) Stream removed, broadcasting: 3\nI0128 14:17:23.744881    2263 log.go:172] (0xc000980370) (0xc000938000) Stream removed, broadcasting: 5\n"
Jan 28 14:17:23.758: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:17:23.758: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 14:17:23.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:17:24.178: INFO: stderr: "I0128 14:17:23.969724    2284 log.go:172] (0xc000794210) (0xc00087e140) Create stream\nI0128 14:17:23.970092    2284 log.go:172] (0xc000794210) (0xc00087e140) Stream added, broadcasting: 1\nI0128 14:17:23.981018    2284 log.go:172] (0xc000794210) Reply frame received for 1\nI0128 14:17:23.981087    2284 log.go:172] (0xc000794210) (0xc000896000) Create stream\nI0128 14:17:23.981100    2284 log.go:172] (0xc000794210) (0xc000896000) Stream added, broadcasting: 3\nI0128 14:17:23.984027    2284 log.go:172] (0xc000794210) Reply frame received for 3\nI0128 14:17:23.984194    2284 log.go:172] (0xc000794210) (0xc00087e1e0) Create stream\nI0128 14:17:23.984207    2284 log.go:172] (0xc000794210) (0xc00087e1e0) Stream added, broadcasting: 5\nI0128 14:17:23.986491    2284 log.go:172] (0xc000794210) Reply frame received for 5\nI0128 14:17:24.073913    2284 log.go:172] (0xc000794210) Data frame received for 5\nI0128 14:17:24.074144    2284 log.go:172] (0xc00087e1e0) (5) Data frame handling\nI0128 14:17:24.074197    2284 log.go:172] (0xc00087e1e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:17:24.092696    2284 log.go:172] (0xc000794210) Data frame received for 3\nI0128 14:17:24.092870    2284 log.go:172] (0xc000896000) (3) Data frame handling\nI0128 14:17:24.092894    2284 log.go:172] (0xc000896000) (3) Data frame sent\nI0128 14:17:24.168955    2284 log.go:172] (0xc000794210) Data frame received for 1\nI0128 14:17:24.169066    2284 log.go:172] (0xc00087e140) (1) Data frame handling\nI0128 14:17:24.169083    2284 log.go:172] (0xc00087e140) (1) Data frame sent\nI0128 14:17:24.169557    2284 log.go:172] (0xc000794210) (0xc00087e140) Stream removed, broadcasting: 1\nI0128 14:17:24.169932    2284 log.go:172] (0xc000794210) (0xc000896000) Stream removed, broadcasting: 3\nI0128 14:17:24.170049    2284 log.go:172] (0xc000794210) (0xc00087e1e0) Stream removed, broadcasting: 5\nI0128 14:17:24.170273    2284 log.go:172] (0xc000794210) Go away received\nI0128 14:17:24.170606    2284 log.go:172] (0xc000794210) (0xc00087e140) Stream removed, broadcasting: 1\nI0128 14:17:24.170640    2284 log.go:172] (0xc000794210) (0xc000896000) Stream removed, broadcasting: 3\nI0128 14:17:24.170658    2284 log.go:172] (0xc000794210) (0xc00087e1e0) Stream removed, broadcasting: 5\n"
Jan 28 14:17:24.178: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:17:24.178: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 14:17:24.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:17:24.748: INFO: stderr: "I0128 14:17:24.341573    2302 log.go:172] (0xc000956420) (0xc00031a820) Create stream\nI0128 14:17:24.341757    2302 log.go:172] (0xc000956420) (0xc00031a820) Stream added, broadcasting: 1\nI0128 14:17:24.348385    2302 log.go:172] (0xc000956420) Reply frame received for 1\nI0128 14:17:24.348447    2302 log.go:172] (0xc000956420) (0xc000702000) Create stream\nI0128 14:17:24.348463    2302 log.go:172] (0xc000956420) (0xc000702000) Stream added, broadcasting: 3\nI0128 14:17:24.349670    2302 log.go:172] (0xc000956420) Reply frame received for 3\nI0128 14:17:24.349701    2302 log.go:172] (0xc000956420) (0xc00095c000) Create stream\nI0128 14:17:24.352240    2302 log.go:172] (0xc000956420) (0xc00095c000) Stream added, broadcasting: 5\nI0128 14:17:24.362670    2302 log.go:172] (0xc000956420) Reply frame received for 5\nI0128 14:17:24.504384    2302 log.go:172] (0xc000956420) Data frame received for 5\nI0128 14:17:24.504588    2302 log.go:172] (0xc00095c000) (5) Data frame handling\nI0128 14:17:24.504649    2302 log.go:172] (0xc00095c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:17:24.541229    2302 log.go:172] (0xc000956420) Data frame received for 3\nI0128 14:17:24.541472    2302 log.go:172] (0xc000702000) (3) Data frame handling\nI0128 14:17:24.541541    2302 log.go:172] (0xc000702000) (3) Data frame sent\nI0128 14:17:24.732303    2302 log.go:172] (0xc000956420) Data frame received for 1\nI0128 14:17:24.732644    2302 log.go:172] (0xc000956420) (0xc000702000) Stream removed, broadcasting: 3\nI0128 14:17:24.732749    2302 log.go:172] (0xc00031a820) (1) Data frame handling\nI0128 14:17:24.732802    2302 log.go:172] (0xc00031a820) (1) Data frame sent\nI0128 14:17:24.733099    2302 log.go:172] (0xc000956420) (0xc00095c000) Stream removed, broadcasting: 5\nI0128 14:17:24.733373    2302 log.go:172] (0xc000956420) (0xc00031a820) Stream removed, broadcasting: 1\nI0128 14:17:24.733515    2302 log.go:172] (0xc000956420) Go away received\nI0128 14:17:24.735410    2302 log.go:172] (0xc000956420) (0xc00031a820) Stream removed, broadcasting: 1\nI0128 14:17:24.735535    2302 log.go:172] (0xc000956420) (0xc000702000) Stream removed, broadcasting: 3\nI0128 14:17:24.735575    2302 log.go:172] (0xc000956420) (0xc00095c000) Stream removed, broadcasting: 5\n"
Jan 28 14:17:24.748: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:17:24.748: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 14:17:24.748: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 14:17:24.754: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 28 14:17:34.787: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 14:17:34.787: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 14:17:34.787: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 14:17:34.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999447s
Jan 28 14:17:35.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.965487733s
Jan 28 14:17:36.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.947854881s
Jan 28 14:17:37.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.899534443s
Jan 28 14:17:38.939: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.884678171s
Jan 28 14:17:39.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.859149252s
Jan 28 14:17:41.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849150018s
Jan 28 14:17:42.217: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.591938042s
Jan 28 14:17:43.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.581214749s
Jan 28 14:17:44.243: INFO: Verifying statefulset ss doesn't scale past 3 for another 568.43102ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-4401
Jan 28 14:17:45.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:17:45.924: INFO: stderr: "I0128 14:17:45.463456    2322 log.go:172] (0xc00013ad10) (0xc000678780) Create stream\nI0128 14:17:45.463927    2322 log.go:172] (0xc00013ad10) (0xc000678780) Stream added, broadcasting: 1\nI0128 14:17:45.473562    2322 log.go:172] (0xc00013ad10) Reply frame received for 1\nI0128 14:17:45.473630    2322 log.go:172] (0xc00013ad10) (0xc0007ec000) Create stream\nI0128 14:17:45.473642    2322 log.go:172] (0xc00013ad10) (0xc0007ec000) Stream added, broadcasting: 3\nI0128 14:17:45.476528    2322 log.go:172] (0xc00013ad10) Reply frame received for 3\nI0128 14:17:45.476684    2322 log.go:172] (0xc00013ad10) (0xc000678820) Create stream\nI0128 14:17:45.476710    2322 log.go:172] (0xc00013ad10) (0xc000678820) Stream added, broadcasting: 5\nI0128 14:17:45.478951    2322 log.go:172] (0xc00013ad10) Reply frame received for 5\nI0128 14:17:45.634383    2322 log.go:172] (0xc00013ad10) Data frame received for 5\nI0128 14:17:45.634672    2322 log.go:172] (0xc000678820) (5) Data frame handling\nI0128 14:17:45.634728    2322 log.go:172] (0xc000678820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:17:45.634857    2322 log.go:172] (0xc00013ad10) Data frame received for 3\nI0128 14:17:45.634877    2322 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0128 14:17:45.634901    2322 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0128 14:17:45.901313    2322 log.go:172] (0xc00013ad10) Data frame received for 1\nI0128 14:17:45.901656    2322 log.go:172] (0xc00013ad10) (0xc000678820) Stream removed, broadcasting: 5\nI0128 14:17:45.901824    2322 log.go:172] (0xc000678780) (1) Data frame handling\nI0128 14:17:45.901870    2322 log.go:172] (0xc000678780) (1) Data frame sent\nI0128 14:17:45.901940    2322 log.go:172] (0xc00013ad10) (0xc0007ec000) Stream removed, broadcasting: 3\nI0128 14:17:45.902491    2322 log.go:172] (0xc00013ad10) (0xc000678780) Stream removed, broadcasting: 1\nI0128 14:17:45.902541    2322 log.go:172] (0xc00013ad10) Go away received\nI0128 14:17:45.905790    2322 log.go:172] (0xc00013ad10) (0xc000678780) Stream removed, broadcasting: 1\nI0128 14:17:45.906280    2322 log.go:172] (0xc00013ad10) (0xc0007ec000) Stream removed, broadcasting: 3\nI0128 14:17:45.906334    2322 log.go:172] (0xc00013ad10) (0xc000678820) Stream removed, broadcasting: 5\n"
Jan 28 14:17:45.924: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:17:45.924: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:17:45.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:17:46.263: INFO: stderr: "I0128 14:17:46.109257    2340 log.go:172] (0xc000118630) (0xc0008845a0) Create stream\nI0128 14:17:46.109475    2340 log.go:172] (0xc000118630) (0xc0008845a0) Stream added, broadcasting: 1\nI0128 14:17:46.112763    2340 log.go:172] (0xc000118630) Reply frame received for 1\nI0128 14:17:46.112789    2340 log.go:172] (0xc000118630) (0xc0006a40a0) Create stream\nI0128 14:17:46.112795    2340 log.go:172] (0xc000118630) (0xc0006a40a0) Stream added, broadcasting: 3\nI0128 14:17:46.113745    2340 log.go:172] (0xc000118630) Reply frame received for 3\nI0128 14:17:46.113768    2340 log.go:172] (0xc000118630) (0xc00098a000) Create stream\nI0128 14:17:46.113775    2340 log.go:172] (0xc000118630) (0xc00098a000) Stream added, broadcasting: 5\nI0128 14:17:46.114713    2340 log.go:172] (0xc000118630) Reply frame received for 5\nI0128 14:17:46.187230    2340 log.go:172] (0xc000118630) Data frame received for 3\nI0128 14:17:46.187408    2340 log.go:172] (0xc0006a40a0) (3) Data frame handling\nI0128 14:17:46.187452    2340 log.go:172] (0xc0006a40a0) (3) Data frame sent\nI0128 14:17:46.187606    2340 log.go:172] (0xc000118630) Data frame received for 5\nI0128 14:17:46.187635    2340 log.go:172] (0xc00098a000) (5) Data frame handling\nI0128 14:17:46.187666    2340 log.go:172] (0xc00098a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:17:46.257476    2340 log.go:172] (0xc000118630) Data frame received for 1\nI0128 14:17:46.257552    2340 log.go:172] (0xc000118630) (0xc0006a40a0) Stream removed, broadcasting: 3\nI0128 14:17:46.257610    2340 log.go:172] (0xc0008845a0) (1) Data frame handling\nI0128 14:17:46.257632    2340 log.go:172] (0xc0008845a0) (1) Data frame sent\nI0128 14:17:46.257666    2340 log.go:172] (0xc000118630) (0xc00098a000) Stream removed, broadcasting: 5\nI0128 14:17:46.257699    2340 log.go:172] (0xc000118630) (0xc0008845a0) Stream removed, broadcasting: 1\nI0128 14:17:46.257718    2340 log.go:172] (0xc000118630) Go away received\nI0128 14:17:46.258438    2340 log.go:172] (0xc000118630) (0xc0008845a0) Stream removed, broadcasting: 1\nI0128 14:17:46.258459    2340 log.go:172] (0xc000118630) (0xc0006a40a0) Stream removed, broadcasting: 3\nI0128 14:17:46.258465    2340 log.go:172] (0xc000118630) (0xc00098a000) Stream removed, broadcasting: 5\n"
Jan 28 14:17:46.263: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:17:46.263: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:17:46.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4401 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:17:47.109: INFO: stderr: "I0128 14:17:46.557189    2359 log.go:172] (0xc00094e000) (0xc000890140) Create stream\nI0128 14:17:46.557871    2359 log.go:172] (0xc00094e000) (0xc000890140) Stream added, broadcasting: 1\nI0128 14:17:46.568516    2359 log.go:172] (0xc00094e000) Reply frame received for 1\nI0128 14:17:46.568725    2359 log.go:172] (0xc00094e000) (0xc00064e280) Create stream\nI0128 14:17:46.568768    2359 log.go:172] (0xc00094e000) (0xc00064e280) Stream added, broadcasting: 3\nI0128 14:17:46.574953    2359 log.go:172] (0xc00094e000) Reply frame received for 3\nI0128 14:17:46.575620    2359 log.go:172] (0xc00094e000) (0xc000326000) Create stream\nI0128 14:17:46.575786    2359 log.go:172] (0xc00094e000) (0xc000326000) Stream added, broadcasting: 5\nI0128 14:17:46.580255    2359 log.go:172] (0xc00094e000) Reply frame received for 5\nI0128 14:17:46.785817    2359 log.go:172] (0xc00094e000) Data frame received for 3\nI0128 14:17:46.786015    2359 log.go:172] (0xc00064e280) (3) Data frame handling\nI0128 14:17:46.786054    2359 log.go:172] (0xc00064e280) (3) Data frame sent\nI0128 14:17:46.786185    2359 log.go:172] (0xc00094e000) Data frame received for 5\nI0128 14:17:46.786335    2359 log.go:172] (0xc000326000) (5) Data frame handling\nI0128 14:17:46.786357    2359 log.go:172] (0xc000326000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:17:47.100675    2359 log.go:172] (0xc00094e000) (0xc00064e280) Stream removed, broadcasting: 3\nI0128 14:17:47.100821    2359 log.go:172] (0xc00094e000) Data frame received for 1\nI0128 14:17:47.101083    2359 log.go:172] (0xc00094e000) (0xc000326000) Stream removed, broadcasting: 5\nI0128 14:17:47.101157    2359 log.go:172] (0xc000890140) (1) Data frame handling\nI0128 14:17:47.101186    2359 log.go:172] (0xc000890140) (1) Data frame sent\nI0128 14:17:47.101193    2359 log.go:172] (0xc00094e000) (0xc000890140) Stream removed, broadcasting: 1\nI0128 14:17:47.101204    2359 log.go:172] (0xc00094e000) Go away received\nI0128 14:17:47.102267    2359 log.go:172] (0xc00094e000) (0xc000890140) Stream removed, broadcasting: 1\nI0128 14:17:47.102283    2359 log.go:172] (0xc00094e000) (0xc00064e280) Stream removed, broadcasting: 3\nI0128 14:17:47.102293    2359 log.go:172] (0xc00094e000) (0xc000326000) Stream removed, broadcasting: 5\n"
Jan 28 14:17:47.109: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:17:47.109: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:17:47.109: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 28 14:18:17.136: INFO: Deleting all statefulset in ns statefulset-4401
Jan 28 14:18:17.143: INFO: Scaling statefulset ss to 0
Jan 28 14:18:17.157: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 14:18:17.160: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:18:17.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4401" for this suite.
Jan 28 14:18:23.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:18:23.385: INFO: namespace statefulset-4401 deletion completed in 6.17611183s

• [SLOW TEST:111.925 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
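Editor's note: the StatefulSet test above halts ordered scaling by breaking each pod's readiness probe (moving the nginx index file aside) and resumes it by restoring the file. A minimal manual sketch of the same trick, assuming a StatefulSet named `ss` serving nginx in namespace `statefulset-4401` (the pod and namespace names are taken from this run; nothing else is guaranteed):

```shell
# Break the readiness probe on ss-2 by hiding the file nginx serves.
# The `|| true` mirrors the e2e test: a missing file must not abort the exec.
kubectl exec --namespace=statefulset-4401 ss-2 -- \
  /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# The pod now reports Running but not Ready, which halts further
# ordered scale operations on the StatefulSet.
kubectl get pods --namespace=statefulset-4401

# Restore the file so the probe passes again, then scale to 0;
# pods terminate in reverse ordinal order (ss-2, ss-1, ss-0).
kubectl exec --namespace=statefulset-4401 ss-2 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl scale statefulset ss --namespace=statefulset-4401 --replicas=0
```

These commands require a live cluster with the StatefulSet already deployed; they are a reading aid for the log, not a standalone script.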
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:18:23.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:18:23.528: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 16.156432ms)
Jan 28 14:18:23.535: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.154662ms)
Jan 28 14:18:23.552: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 17.140137ms)
Jan 28 14:18:23.564: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 11.624725ms)
Jan 28 14:18:23.569: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.436142ms)
Jan 28 14:18:23.577: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.450901ms)
Jan 28 14:18:23.583: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.183719ms)
Jan 28 14:18:23.588: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.483668ms)
Jan 28 14:18:23.594: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.58063ms)
Jan 28 14:18:23.600: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.076583ms)
Jan 28 14:18:23.605: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.938311ms)
Jan 28 14:18:23.612: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.866004ms)
Jan 28 14:18:23.617: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.601516ms)
Jan 28 14:18:23.621: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.635839ms)
Jan 28 14:18:23.628: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.449165ms)
Jan 28 14:18:23.635: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.769757ms)
Jan 28 14:18:23.643: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.243819ms)
Jan 28 14:18:23.656: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 13.331178ms)
Jan 28 14:18:23.663: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.13812ms)
Jan 28 14:18:23.667: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.994625ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:18:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1554" for this suite.
Jan 28 14:18:29.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:18:29.892: INFO: namespace proxy-1554 deletion completed in 6.221032043s

• [SLOW TEST:6.504 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
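Editor's note: the proxy test above issues twenty requests against the node's `logs` proxy subresource and records the latency of each. The same endpoint can be queried by hand; `iruya-node` is the node name from this run, and the port for `kubectl proxy` is an arbitrary choice:

```shell
# Fetch a node's log directory listing through the apiserver's
# node proxy subresource, as the test does to sample latency.
kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/

# Equivalent request through a local kubectl proxy:
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/nodes/iruya-node/proxy/logs/
```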
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:18:29.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-aac874db-28b3-40bd-85c2-5825573e4ef8
STEP: Creating a pod to test consume secrets
Jan 28 14:18:30.037: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f" in namespace "projected-2784" to be "success or failure"
Jan 28 14:18:30.079: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.079538ms
Jan 28 14:18:32.091: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053976929s
Jan 28 14:18:34.102: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064465303s
Jan 28 14:18:36.111: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07342832s
Jan 28 14:18:38.128: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091066599s
STEP: Saw pod success
Jan 28 14:18:38.128: INFO: Pod "pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f" satisfied condition "success or failure"
Jan 28 14:18:38.136: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 14:18:38.219: INFO: Waiting for pod pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f to disappear
Jan 28 14:18:38.244: INFO: Pod pod-projected-secrets-c3cf2a58-330f-4dc4-82cb-36c98c4ac67f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:18:38.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2784" for this suite.
Jan 28 14:18:44.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:18:44.600: INFO: namespace projected-2784 deletion completed in 6.269710914s

• [SLOW TEST:14.707 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
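Editor's note: a sketch of the kind of pod the projected-secret test creates: a secret exposed through a `projected` volume and read once by a short-lived container. All names here (`projected-secret-test`, `data-1`, the busybox image) are illustrative, not taken from the test's actual fixtures:

```shell
kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF
```

The test's "success or failure" condition corresponds to this pod reaching `Succeeded` after printing the secret's content.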
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:18:44.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:18:44.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa" in namespace "downward-api-6040" to be "success or failure"
Jan 28 14:18:44.697: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa": Phase="Pending", Reason="", readiness=false. Elapsed: 15.539734ms
Jan 28 14:18:46.707: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025885836s
Jan 28 14:18:48.716: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034005921s
Jan 28 14:18:50.737: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055287686s
Jan 28 14:18:52.743: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061744605s
STEP: Saw pod success
Jan 28 14:18:52.743: INFO: Pod "downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa" satisfied condition "success or failure"
Jan 28 14:18:52.750: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa container client-container: 
STEP: delete the pod
Jan 28 14:18:52.814: INFO: Waiting for pod downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa to disappear
Jan 28 14:18:52.898: INFO: Pod downwardapi-volume-dcd96994-4fef-44df-9c56-fb351a8196fa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:18:52.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6040" for this suite.
Jan 28 14:18:58.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:18:59.151: INFO: namespace downward-api-6040 deletion completed in 6.206333139s

• [SLOW TEST:14.550 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
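Editor's note: the downward API volume test verifies that a container's CPU limit is visible as a file. A hedged sketch of such a pod (names and values are illustrative; the container name `client-container` matches the log above):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
# With divisor 1m and a 500m limit, the mounted file contains "500".
```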
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:18:59.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 28 14:18:59.311: INFO: Waiting up to 5m0s for pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a" in namespace "downward-api-6193" to be "success or failure"
Jan 28 14:18:59.315: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.882314ms
Jan 28 14:19:01.437: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125270439s
Jan 28 14:19:03.450: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.138248455s
Jan 28 14:19:05.460: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148359238s
Jan 28 14:19:07.468: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.156850618s
STEP: Saw pod success
Jan 28 14:19:07.468: INFO: Pod "downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a" satisfied condition "success or failure"
Jan 28 14:19:07.474: INFO: Trying to get logs from node iruya-node pod downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a container dapi-container: 
STEP: delete the pod
Jan 28 14:19:07.556: INFO: Waiting for pod downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a to disappear
Jan 28 14:19:07.636: INFO: Pod downward-api-ba186284-62c4-4b94-9b42-abec10d78c4a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:19:07.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6193" for this suite.
Jan 28 14:19:13.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:19:13.905: INFO: namespace downward-api-6193 deletion completed in 6.263227712s

• [SLOW TEST:14.753 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
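Editor's note: the env-var variant of the downward API test surfaces limits and requests through `resourceFieldRef` environment variables rather than a volume. A sketch with illustrative names and values (`dapi-container` matches the container name in the log):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests:
        cpu: "250m"
        memory: "32Mi"
      limits:
        cpu: "500m"
        memory: "64Mi"
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF
```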
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:19:13.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-djdz
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 14:19:14.101: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-djdz" in namespace "subpath-9025" to be "success or failure"
Jan 28 14:19:14.110: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Pending", Reason="", readiness=false. Elapsed: 9.562973ms
Jan 28 14:19:16.120: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019507226s
Jan 28 14:19:18.137: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03627166s
Jan 28 14:19:20.148: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047170697s
Jan 28 14:19:22.167: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 8.066621415s
Jan 28 14:19:24.176: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 10.075257158s
Jan 28 14:19:26.189: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 12.088605078s
Jan 28 14:19:28.198: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 14.09765388s
Jan 28 14:19:30.206: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 16.105108186s
Jan 28 14:19:32.212: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 18.111556666s
Jan 28 14:19:34.225: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 20.123779763s
Jan 28 14:19:36.235: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 22.133843816s
Jan 28 14:19:38.244: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 24.143718423s
Jan 28 14:19:40.254: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 26.153200561s
Jan 28 14:19:42.262: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Running", Reason="", readiness=true. Elapsed: 28.161070308s
Jan 28 14:19:44.274: INFO: Pod "pod-subpath-test-projected-djdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.172939228s
STEP: Saw pod success
Jan 28 14:19:44.274: INFO: Pod "pod-subpath-test-projected-djdz" satisfied condition "success or failure"
Jan 28 14:19:44.278: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-djdz container test-container-subpath-projected-djdz: 
STEP: delete the pod
Jan 28 14:19:44.347: INFO: Waiting for pod pod-subpath-test-projected-djdz to disappear
Jan 28 14:19:44.356: INFO: Pod pod-subpath-test-projected-djdz no longer exists
STEP: Deleting pod pod-subpath-test-projected-djdz
Jan 28 14:19:44.357: INFO: Deleting pod "pod-subpath-test-projected-djdz" in namespace "subpath-9025"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:19:44.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9025" for this suite.
Jan 28 14:19:50.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:19:50.648: INFO: namespace subpath-9025 deletion completed in 6.22440294s

• [SLOW TEST:36.741 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
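Editor's note: the subpath test mounts a single entry of a projected (atomically written) volume into the container via `subPath`. A simplified, hypothetical sketch of that shape; the real test exercises more update semantics than this shows:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/probe-volume/podname"]
    volumeMounts:
    - name: projected-vol
      mountPath: /probe-volume
      subPath: downward        # mount only this sub-directory of the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - downwardAPI:
          items:
          - path: downward/podname
            fieldRef:
              fieldPath: metadata.name
EOF
```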
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:19:50.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-25f00983-4526-42ea-aa71-2483a5e69584 in namespace container-probe-9314
Jan 28 14:19:58.776: INFO: Started pod test-webserver-25f00983-4526-42ea-aa71-2483a5e69584 in namespace container-probe-9314
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 14:19:58.782: INFO: Initial restart count of pod test-webserver-25f00983-4526-42ea-aa71-2483a5e69584 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:24:00.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9314" for this suite.
Jan 28 14:24:06.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:24:06.960: INFO: namespace container-probe-9314 deletion completed in 6.197772919s

• [SLOW TEST:256.311 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
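The probe test above starts a webserver pod, records its initial restartCount (0 in this run), and then watches for roughly four minutes to confirm the /healthz liveness probe never triggers a restart. A sketch of the kind of pod it creates (image and probe timings are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example   # the real test appends a UUID
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0  # assumed image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /healthz   # the path named in the spec title
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
```

Because the server answers /healthz with 200, the kubelet never kills the container and restartCount stays at its initial value.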
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:24:06.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 28 14:24:07.833: INFO: Pod name wrapped-volume-race-a6f75c23-fc9f-409b-a51b-2a90eb6ebff4: Found 0 pods out of 5
Jan 28 14:24:12.858: INFO: Pod name wrapped-volume-race-a6f75c23-fc9f-409b-a51b-2a90eb6ebff4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a6f75c23-fc9f-409b-a51b-2a90eb6ebff4 in namespace emptydir-wrapper-8303, will wait for the garbage collector to delete the pods
Jan 28 14:24:40.987: INFO: Deleting ReplicationController wrapped-volume-race-a6f75c23-fc9f-409b-a51b-2a90eb6ebff4 took: 21.129773ms
Jan 28 14:24:41.289: INFO: Terminating ReplicationController wrapped-volume-race-a6f75c23-fc9f-409b-a51b-2a90eb6ebff4 pods took: 301.434646ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 14:25:27.029: INFO: Pod name wrapped-volume-race-9c0c9849-c78e-424e-830e-45a557de8127: Found 0 pods out of 5
Jan 28 14:25:32.045: INFO: Pod name wrapped-volume-race-9c0c9849-c78e-424e-830e-45a557de8127: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9c0c9849-c78e-424e-830e-45a557de8127 in namespace emptydir-wrapper-8303, will wait for the garbage collector to delete the pods
Jan 28 14:26:06.168: INFO: Deleting ReplicationController wrapped-volume-race-9c0c9849-c78e-424e-830e-45a557de8127 took: 8.408395ms
Jan 28 14:26:06.469: INFO: Terminating ReplicationController wrapped-volume-race-9c0c9849-c78e-424e-830e-45a557de8127 pods took: 300.704554ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 14:26:57.036: INFO: Pod name wrapped-volume-race-52b6432e-8deb-43d4-94c8-8968225165c7: Found 0 pods out of 5
Jan 28 14:27:02.057: INFO: Pod name wrapped-volume-race-52b6432e-8deb-43d4-94c8-8968225165c7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-52b6432e-8deb-43d4-94c8-8968225165c7 in namespace emptydir-wrapper-8303, will wait for the garbage collector to delete the pods
Jan 28 14:27:34.212: INFO: Deleting ReplicationController wrapped-volume-race-52b6432e-8deb-43d4-94c8-8968225165c7 took: 34.436061ms
Jan 28 14:27:34.713: INFO: Terminating ReplicationController wrapped-volume-race-52b6432e-8deb-43d4-94c8-8968225165c7 pods took: 501.392523ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:28:27.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8303" for this suite.
Jan 28 14:28:37.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:28:37.743: INFO: namespace emptydir-wrapper-8303 deletion completed in 10.131579722s

• [SLOW TEST:270.781 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
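The race test above creates 50 ConfigMaps, then (three times) an RC of 5 pods where every pod mounts all 50 ConfigMaps as separate volumes, verifying that the emptyDir wrapper used for ConfigMap volumes does not race during concurrent setup and teardown. A trimmed sketch of one such RC, showing a single ConfigMap volume (names illustrative; the real template repeats the volume/mount pair 50 times):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-example
  template:
    metadata:
      labels:
        name: wrapped-volume-race-example
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0       # repeated for each of the 50 ConfigMaps
          mountPath: /etc/config-0
      volumes:
      - name: racey-configmap-0
        configMap:
          name: racey-configmap-0       # one of the 50 ConfigMaps created first
```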
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:28:37.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 14:29:00.014: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:00.071: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:02.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:02.077: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:04.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:04.081: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:06.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:06.121: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:08.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:08.080: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:10.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:10.079: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:12.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:12.082: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:14.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:14.084: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:16.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:16.078: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:18.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:18.082: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:20.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:20.098: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:22.072: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:22.096: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:24.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:24.081: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:26.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:26.102: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 14:29:28.071: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 14:29:28.082: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:29:28.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8067" for this suite.
Jan 28 14:29:50.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:29:50.341: INFO: namespace container-lifecycle-hook-8067 deletion completed in 22.171025148s

• [SLOW TEST:72.597 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
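The lifecycle test above first starts a handler pod to receive hook traffic, then creates a pod with a preStop exec hook, deletes it, and checks that the hook fired during the (roughly 28-second) termination visible in the polling loop. A minimal sketch of such a pod (image and hook command are assumptions — the real test uses its own test image and handler endpoint):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                     # assumed; the e2e suite uses its own image
    command: ["sleep", "10000"]
    lifecycle:
      preStop:
        exec:
          # illustrative: ping the handler pod so the test can verify the hook ran
          command: ["sh", "-c", "wget -qO- http://handler-pod:8080/echo?msg=prestop"]
```

The kubelet runs the preStop command before sending SIGTERM, which is why the pod lingers in the "still exists" polling loop before finally disappearing.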
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:29:50.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 28 14:29:50.460: INFO: Waiting up to 5m0s for pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d" in namespace "emptydir-7332" to be "success or failure"
Jan 28 14:29:50.487: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.881381ms
Jan 28 14:29:52.506: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045984058s
Jan 28 14:29:54.523: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063295598s
Jan 28 14:29:56.536: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075782136s
Jan 28 14:29:58.551: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090790474s
STEP: Saw pod success
Jan 28 14:29:58.551: INFO: Pod "pod-ca875b91-985a-41de-87b2-0a7388945c0d" satisfied condition "success or failure"
Jan 28 14:29:58.557: INFO: Trying to get logs from node iruya-node pod pod-ca875b91-985a-41de-87b2-0a7388945c0d container test-container: 
STEP: delete the pod
Jan 28 14:29:58.839: INFO: Waiting for pod pod-ca875b91-985a-41de-87b2-0a7388945c0d to disappear
Jan 28 14:29:58.843: INFO: Pod pod-ca875b91-985a-41de-87b2-0a7388945c0d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:29:58.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7332" for this suite.
Jan 28 14:30:04.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:30:05.164: INFO: namespace emptydir-7332 deletion completed in 6.309010148s

• [SLOW TEST:14.821 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
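The emptyDir test above launches a pod that inspects its own mount and exits, then asserts "success or failure" on the pod phase. A sketch of the shape of that pod (names and command are illustrative; the conformance test expects the default-medium mount to carry mode 0777):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]  # prints the mount's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium, i.e. backed by node disk rather than tmpfs
```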
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:30:05.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1388/configmap-test-9fa8e06f-88c9-4acc-9762-d986fc301e64
STEP: Creating a pod to test consume configMaps
Jan 28 14:30:05.347: INFO: Waiting up to 5m0s for pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06" in namespace "configmap-1388" to be "success or failure"
Jan 28 14:30:05.350: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.37125ms
Jan 28 14:30:07.365: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018084883s
Jan 28 14:30:09.383: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036626759s
Jan 28 14:30:11.654: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.306867008s
Jan 28 14:30:13.666: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.319274495s
STEP: Saw pod success
Jan 28 14:30:13.666: INFO: Pod "pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06" satisfied condition "success or failure"
Jan 28 14:30:13.675: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06 container env-test: 
STEP: delete the pod
Jan 28 14:30:13.958: INFO: Waiting for pod pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06 to disappear
Jan 28 14:30:13.970: INFO: Pod pod-configmaps-7c54068c-39ba-4031-b4cf-395002c3cf06 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:30:13.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1388" for this suite.
Jan 28 14:30:20.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:30:20.106: INFO: namespace configmap-1388 deletion completed in 6.122802312s

• [SLOW TEST:14.940 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
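The ConfigMap test above creates a ConfigMap, then a pod whose env-test container sources an environment variable from it and dumps its environment; the pod exiting with "Succeeded" means the value arrived. A minimal sketch (ConfigMap name and key are illustrative, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]       # test reads this output from the pod logs
    env:
    - name: CONFIG_DATA
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example  # illustrative ConfigMap name
          key: data-1                   # illustrative key
```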
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:30:20.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1076
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 28 14:30:20.269: INFO: Found 0 stateful pods, waiting for 3
Jan 28 14:30:30.288: INFO: Found 2 stateful pods, waiting for 3
Jan 28 14:30:40.286: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:30:40.286: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:30:40.286: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 14:30:50.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:30:50.288: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:30:50.288: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:30:50.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1076 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:30:52.576: INFO: stderr: "I0128 14:30:52.325981    2379 log.go:172] (0xc000afe370) (0xc0006d4640) Create stream\nI0128 14:30:52.326224    2379 log.go:172] (0xc000afe370) (0xc0006d4640) Stream added, broadcasting: 1\nI0128 14:30:52.330742    2379 log.go:172] (0xc000afe370) Reply frame received for 1\nI0128 14:30:52.331055    2379 log.go:172] (0xc000afe370) (0xc00076c0a0) Create stream\nI0128 14:30:52.331107    2379 log.go:172] (0xc000afe370) (0xc00076c0a0) Stream added, broadcasting: 3\nI0128 14:30:52.333351    2379 log.go:172] (0xc000afe370) Reply frame received for 3\nI0128 14:30:52.333442    2379 log.go:172] (0xc000afe370) (0xc000310000) Create stream\nI0128 14:30:52.333477    2379 log.go:172] (0xc000afe370) (0xc000310000) Stream added, broadcasting: 5\nI0128 14:30:52.336320    2379 log.go:172] (0xc000afe370) Reply frame received for 5\nI0128 14:30:52.419679    2379 log.go:172] (0xc000afe370) Data frame received for 5\nI0128 14:30:52.419815    2379 log.go:172] (0xc000310000) (5) Data frame handling\nI0128 14:30:52.419859    2379 log.go:172] (0xc000310000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:30:52.461520    2379 log.go:172] (0xc000afe370) Data frame received for 3\nI0128 14:30:52.461598    2379 log.go:172] (0xc00076c0a0) (3) Data frame handling\nI0128 14:30:52.461643    2379 log.go:172] (0xc00076c0a0) (3) Data frame sent\nI0128 14:30:52.565452    2379 log.go:172] (0xc000afe370) Data frame received for 1\nI0128 14:30:52.565618    2379 log.go:172] (0xc000afe370) (0xc000310000) Stream removed, broadcasting: 5\nI0128 14:30:52.565786    2379 log.go:172] (0xc0006d4640) (1) Data frame handling\nI0128 14:30:52.565815    2379 log.go:172] (0xc0006d4640) (1) Data frame sent\nI0128 14:30:52.565859    2379 log.go:172] (0xc000afe370) (0xc00076c0a0) Stream removed, broadcasting: 3\nI0128 14:30:52.565944    2379 log.go:172] (0xc000afe370) (0xc0006d4640) Stream removed, broadcasting: 1\nI0128 14:30:52.565983    2379 log.go:172] 
(0xc000afe370) Go away received\nI0128 14:30:52.567259    2379 log.go:172] (0xc000afe370) (0xc0006d4640) Stream removed, broadcasting: 1\nI0128 14:30:52.567269    2379 log.go:172] (0xc000afe370) (0xc00076c0a0) Stream removed, broadcasting: 3\nI0128 14:30:52.567276    2379 log.go:172] (0xc000afe370) (0xc000310000) Stream removed, broadcasting: 5\n"
Jan 28 14:30:52.577: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:30:52.577: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 28 14:31:02.661: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 28 14:31:12.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1076 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:31:13.132: INFO: stderr: "I0128 14:31:12.935363    2409 log.go:172] (0xc00094a370) (0xc0007e0640) Create stream\nI0128 14:31:12.935618    2409 log.go:172] (0xc00094a370) (0xc0007e0640) Stream added, broadcasting: 1\nI0128 14:31:12.938730    2409 log.go:172] (0xc00094a370) Reply frame received for 1\nI0128 14:31:12.938911    2409 log.go:172] (0xc00094a370) (0xc000870000) Create stream\nI0128 14:31:12.938965    2409 log.go:172] (0xc00094a370) (0xc000870000) Stream added, broadcasting: 3\nI0128 14:31:12.940285    2409 log.go:172] (0xc00094a370) Reply frame received for 3\nI0128 14:31:12.940320    2409 log.go:172] (0xc00094a370) (0xc0007e06e0) Create stream\nI0128 14:31:12.940334    2409 log.go:172] (0xc00094a370) (0xc0007e06e0) Stream added, broadcasting: 5\nI0128 14:31:12.941246    2409 log.go:172] (0xc00094a370) Reply frame received for 5\nI0128 14:31:13.032743    2409 log.go:172] (0xc00094a370) Data frame received for 3\nI0128 14:31:13.032975    2409 log.go:172] (0xc000870000) (3) Data frame handling\nI0128 14:31:13.033020    2409 log.go:172] (0xc000870000) (3) Data frame sent\nI0128 14:31:13.033383    2409 log.go:172] (0xc00094a370) Data frame received for 5\nI0128 14:31:13.033534    2409 log.go:172] (0xc0007e06e0) (5) Data frame handling\nI0128 14:31:13.033591    2409 log.go:172] (0xc0007e06e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:31:13.120614    2409 log.go:172] (0xc00094a370) (0xc000870000) Stream removed, broadcasting: 3\nI0128 14:31:13.120812    2409 log.go:172] (0xc00094a370) Data frame received for 1\nI0128 14:31:13.120855    2409 log.go:172] (0xc00094a370) (0xc0007e06e0) Stream removed, broadcasting: 5\nI0128 14:31:13.120893    2409 log.go:172] (0xc0007e0640) (1) Data frame handling\nI0128 14:31:13.120914    2409 log.go:172] (0xc0007e0640) (1) Data frame sent\nI0128 14:31:13.120932    2409 log.go:172] (0xc00094a370) (0xc0007e0640) Stream removed, broadcasting: 1\nI0128 14:31:13.120949    2409 log.go:172] 
(0xc00094a370) Go away received\nI0128 14:31:13.121837    2409 log.go:172] (0xc00094a370) (0xc0007e0640) Stream removed, broadcasting: 1\nI0128 14:31:13.121866    2409 log.go:172] (0xc00094a370) (0xc000870000) Stream removed, broadcasting: 3\nI0128 14:31:13.121880    2409 log.go:172] (0xc00094a370) (0xc0007e06e0) Stream removed, broadcasting: 5\n"
Jan 28 14:31:13.133: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:31:13.133: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:31:23.204: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:31:23.204: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:31:23.204: INFO: Waiting for Pod statefulset-1076/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:31:33.870: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:31:33.870: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:31:43.218: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:31:43.218: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:31:53.221: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 28 14:32:03.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1076 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 28 14:32:03.783: INFO: stderr: "I0128 14:32:03.471463    2431 log.go:172] (0xc00013afd0) (0xc0006d0b40) Create stream\nI0128 14:32:03.471786    2431 log.go:172] (0xc00013afd0) (0xc0006d0b40) Stream added, broadcasting: 1\nI0128 14:32:03.475691    2431 log.go:172] (0xc00013afd0) Reply frame received for 1\nI0128 14:32:03.475731    2431 log.go:172] (0xc00013afd0) (0xc0006d0be0) Create stream\nI0128 14:32:03.475741    2431 log.go:172] (0xc00013afd0) (0xc0006d0be0) Stream added, broadcasting: 3\nI0128 14:32:03.477363    2431 log.go:172] (0xc00013afd0) Reply frame received for 3\nI0128 14:32:03.477384    2431 log.go:172] (0xc00013afd0) (0xc0006d0c80) Create stream\nI0128 14:32:03.477392    2431 log.go:172] (0xc00013afd0) (0xc0006d0c80) Stream added, broadcasting: 5\nI0128 14:32:03.479020    2431 log.go:172] (0xc00013afd0) Reply frame received for 5\nI0128 14:32:03.608399    2431 log.go:172] (0xc00013afd0) Data frame received for 5\nI0128 14:32:03.608529    2431 log.go:172] (0xc0006d0c80) (5) Data frame handling\nI0128 14:32:03.608576    2431 log.go:172] (0xc0006d0c80) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0128 14:32:03.661860    2431 log.go:172] (0xc00013afd0) Data frame received for 3\nI0128 14:32:03.662012    2431 log.go:172] (0xc0006d0be0) (3) Data frame handling\nI0128 14:32:03.662039    2431 log.go:172] (0xc0006d0be0) (3) Data frame sent\nI0128 14:32:03.755554    2431 log.go:172] (0xc00013afd0) (0xc0006d0be0) Stream removed, broadcasting: 3\nI0128 14:32:03.756015    2431 log.go:172] (0xc00013afd0) Data frame received for 1\nI0128 14:32:03.756046    2431 log.go:172] (0xc0006d0b40) (1) Data frame handling\nI0128 14:32:03.756107    2431 log.go:172] (0xc0006d0b40) (1) Data frame sent\nI0128 14:32:03.756134    2431 log.go:172] (0xc00013afd0) (0xc0006d0b40) Stream removed, broadcasting: 1\nI0128 14:32:03.756384    2431 log.go:172] (0xc00013afd0) (0xc0006d0c80) Stream removed, broadcasting: 5\nI0128 14:32:03.756641    2431 log.go:172] 
(0xc00013afd0) Go away received\nI0128 14:32:03.760300    2431 log.go:172] (0xc00013afd0) (0xc0006d0b40) Stream removed, broadcasting: 1\nI0128 14:32:03.760328    2431 log.go:172] (0xc00013afd0) (0xc0006d0be0) Stream removed, broadcasting: 3\nI0128 14:32:03.760341    2431 log.go:172] (0xc00013afd0) (0xc0006d0c80) Stream removed, broadcasting: 5\n"
Jan 28 14:32:03.784: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 28 14:32:03.784: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 28 14:32:13.863: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 28 14:32:23.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1076 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 28 14:32:24.250: INFO: stderr: "I0128 14:32:24.111957    2452 log.go:172] (0xc000860370) (0xc000746640) Create stream\nI0128 14:32:24.112226    2452 log.go:172] (0xc000860370) (0xc000746640) Stream added, broadcasting: 1\nI0128 14:32:24.115533    2452 log.go:172] (0xc000860370) Reply frame received for 1\nI0128 14:32:24.115560    2452 log.go:172] (0xc000860370) (0xc0007f6000) Create stream\nI0128 14:32:24.115571    2452 log.go:172] (0xc000860370) (0xc0007f6000) Stream added, broadcasting: 3\nI0128 14:32:24.116528    2452 log.go:172] (0xc000860370) Reply frame received for 3\nI0128 14:32:24.116552    2452 log.go:172] (0xc000860370) (0xc0001f81e0) Create stream\nI0128 14:32:24.116565    2452 log.go:172] (0xc000860370) (0xc0001f81e0) Stream added, broadcasting: 5\nI0128 14:32:24.117678    2452 log.go:172] (0xc000860370) Reply frame received for 5\nI0128 14:32:24.173847    2452 log.go:172] (0xc000860370) Data frame received for 5\nI0128 14:32:24.174113    2452 log.go:172] (0xc0001f81e0) (5) Data frame handling\nI0128 14:32:24.174176    2452 log.go:172] (0xc0001f81e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0128 14:32:24.174785    2452 log.go:172] (0xc000860370) Data frame received for 3\nI0128 14:32:24.174846    2452 log.go:172] (0xc0007f6000) (3) Data frame handling\nI0128 14:32:24.174878    2452 log.go:172] (0xc0007f6000) (3) Data frame sent\nI0128 14:32:24.242852    2452 log.go:172] (0xc000860370) (0xc0007f6000) Stream removed, broadcasting: 3\nI0128 14:32:24.243026    2452 log.go:172] (0xc000860370) Data frame received for 1\nI0128 14:32:24.243044    2452 log.go:172] (0xc000860370) (0xc0001f81e0) Stream removed, broadcasting: 5\nI0128 14:32:24.243080    2452 log.go:172] (0xc000746640) (1) Data frame handling\nI0128 14:32:24.243092    2452 log.go:172] (0xc000746640) (1) Data frame sent\nI0128 14:32:24.243109    2452 log.go:172] (0xc000860370) (0xc000746640) Stream removed, broadcasting: 1\nI0128 14:32:24.243127    2452 log.go:172] 
(0xc000860370) Go away received\nI0128 14:32:24.243726    2452 log.go:172] (0xc000860370) (0xc000746640) Stream removed, broadcasting: 1\nI0128 14:32:24.243742    2452 log.go:172] (0xc000860370) (0xc0007f6000) Stream removed, broadcasting: 3\nI0128 14:32:24.243749    2452 log.go:172] (0xc000860370) (0xc0001f81e0) Stream removed, broadcasting: 5\n"
Jan 28 14:32:24.251: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 28 14:32:24.251: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 28 14:32:34.284: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:32:34.284: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:32:34.284: INFO: Waiting for Pod statefulset-1076/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:32:34.285: INFO: Waiting for Pod statefulset-1076/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:32:44.338: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:32:44.338: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:32:44.338: INFO: Waiting for Pod statefulset-1076/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:32:55.071: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:32:55.071: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:33:04.298: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
Jan 28 14:33:04.298: INFO: Waiting for Pod statefulset-1076/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 28 14:33:14.298: INFO: Waiting for StatefulSet statefulset-1076/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 28 14:33:24.306: INFO: Deleting all statefulset in ns statefulset-1076
Jan 28 14:33:24.314: INFO: Scaling statefulset ss2 to 0
Jan 28 14:34:04.390: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 14:34:04.400: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:34:04.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1076" for this suite.
Jan 28 14:34:12.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:34:12.697: INFO: namespace statefulset-1076 deletion completed in 8.207302782s

• [SLOW TEST:232.588 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:34:12.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-45338d81-86c3-4880-beea-b59c67a93a55
STEP: Creating a pod to test consume configMaps
Jan 28 14:34:12.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39" in namespace "configmap-7960" to be "success or failure"
Jan 28 14:34:12.892: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39": Phase="Pending", Reason="", readiness=false. Elapsed: 7.240335ms
Jan 28 14:34:14.903: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017960357s
Jan 28 14:34:16.921: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03591967s
Jan 28 14:34:18.935: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049768638s
Jan 28 14:34:20.943: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058483365s
STEP: Saw pod success
Jan 28 14:34:20.943: INFO: Pod "pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39" satisfied condition "success or failure"
Jan 28 14:34:20.948: INFO: Trying to get logs from node iruya-node pod pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39 container configmap-volume-test: 
STEP: delete the pod
Jan 28 14:34:21.079: INFO: Waiting for pod pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39 to disappear
Jan 28 14:34:21.095: INFO: Pod pod-configmaps-95fe951f-c39e-42da-a7e2-ce5e8d000a39 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:34:21.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7960" for this suite.
Jan 28 14:34:27.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:34:27.342: INFO: namespace configmap-7960 deletion completed in 6.231810964s

• [SLOW TEST:14.645 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:34:27.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:34:27.418: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 28 14:34:27.498: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 28 14:34:32.511: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 14:34:36.533: INFO: Creating deployment "test-rolling-update-deployment"
Jan 28 14:34:36.563: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 28 14:34:36.587: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 28 14:34:38.632: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 28 14:34:38.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:34:40.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:34:42.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:34:44.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818876, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:34:46.663: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan 28 14:34:46.686: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-51,SelfLink:/apis/apps/v1/namespaces/deployment-51/deployments/test-rolling-update-deployment,UID:9907a1d1-a414-41c9-a6b0-aa7b9279d2a7,ResourceVersion:22199371,Generation:1,CreationTimestamp:2020-01-28 14:34:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-28 14:34:36 +0000 UTC 2020-01-28 14:34:36 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-28 14:34:44 +0000 UTC 2020-01-28 14:34:36 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 28 14:34:46.702: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-51,SelfLink:/apis/apps/v1/namespaces/deployment-51/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:fc99d1a4-f9aa-4855-b9b8-720eeed43616,ResourceVersion:22199359,Generation:1,CreationTimestamp:2020-01-28 14:34:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9907a1d1-a414-41c9-a6b0-aa7b9279d2a7 0xc001f75387 0xc001f75388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 28 14:34:46.702: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 28 14:34:46.702: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-51,SelfLink:/apis/apps/v1/namespaces/deployment-51/replicasets/test-rolling-update-controller,UID:7ff56b3f-f888-42b7-91d0-e78f1f25b0ae,ResourceVersion:22199369,Generation:2,CreationTimestamp:2020-01-28 14:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9907a1d1-a414-41c9-a6b0-aa7b9279d2a7 0xc001f750f7 0xc001f750f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 28 14:34:46.719: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-mn5l5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-mn5l5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-51,SelfLink:/api/v1/namespaces/deployment-51/pods/test-rolling-update-deployment-79f6b9d75c-mn5l5,UID:57772d94-f5cd-4da9-a59b-5fca49fc317a,ResourceVersion:22199358,Generation:0,CreationTimestamp:2020-01-28 14:34:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c fc99d1a4-f9aa-4855-b9b8-720eeed43616 0xc002a852b7 0xc002a852b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c7bzz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c7bzz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-c7bzz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a85330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a85350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:34:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:34:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:34:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-28 14:34:36 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-28 14:34:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-28 14:34:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1799206b90cf055f70516ad148c27043920da12a347f2333ef5dcebdff869889}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:34:46.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-51" for this suite.
Jan 28 14:34:53.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:34:54.085: INFO: namespace deployment-51 deletion completed in 7.356532317s

• [SLOW TEST:26.741 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:34:54.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 28 14:34:54.183: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan 28 14:34:54.852: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 28 14:34:57.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:34:59.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:35:01.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:35:03.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:35:05.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715818894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 14:35:10.973: INFO: Waited 3.841323291s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:35:11.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-7022" for this suite.
Jan 28 14:35:17.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:35:17.844: INFO: namespace aggregator-7022 deletion completed in 6.253517161s

• [SLOW TEST:23.757 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:35:17.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1097/secret-test-931ac69d-34ff-4af2-82f1-7d901d55a091
STEP: Creating a pod to test consume secrets
Jan 28 14:35:18.009: INFO: Waiting up to 5m0s for pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342" in namespace "secrets-1097" to be "success or failure"
Jan 28 14:35:18.027: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Pending", Reason="", readiness=false. Elapsed: 17.834362ms
Jan 28 14:35:20.037: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027887111s
Jan 28 14:35:22.111: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101729963s
Jan 28 14:35:24.122: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112939926s
Jan 28 14:35:26.130: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120708185s
Jan 28 14:35:28.141: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132130101s
STEP: Saw pod success
Jan 28 14:35:28.141: INFO: Pod "pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342" satisfied condition "success or failure"
Jan 28 14:35:28.146: INFO: Trying to get logs from node iruya-node pod pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342 container env-test: 
STEP: delete the pod
Jan 28 14:35:28.218: INFO: Waiting for pod pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342 to disappear
Jan 28 14:35:28.232: INFO: Pod pod-configmaps-88f9def1-af92-4fea-93d4-24771221e342 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:35:28.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1097" for this suite.
Jan 28 14:35:34.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:35:34.411: INFO: namespace secrets-1097 deletion completed in 6.171501635s

• [SLOW TEST:16.566 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:35:34.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:35:39.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5814" for this suite.
Jan 28 14:35:46.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:35:46.244: INFO: namespace watch-5814 deletion completed in 6.268696689s

• [SLOW TEST:11.832 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:35:46.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3f669c63-ed2d-4ecb-84dc-6a1c35a010d5
STEP: Creating a pod to test consume secrets
Jan 28 14:35:46.346: INFO: Waiting up to 5m0s for pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943" in namespace "secrets-3771" to be "success or failure"
Jan 28 14:35:46.351: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316864ms
Jan 28 14:35:48.361: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014876626s
Jan 28 14:35:50.369: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022695724s
Jan 28 14:35:52.379: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032349204s
Jan 28 14:35:54.388: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041825947s
STEP: Saw pod success
Jan 28 14:35:54.388: INFO: Pod "pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943" satisfied condition "success or failure"
Jan 28 14:35:54.393: INFO: Trying to get logs from node iruya-node pod pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943 container secret-volume-test: 
STEP: delete the pod
Jan 28 14:35:55.532: INFO: Waiting for pod pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943 to disappear
Jan 28 14:35:55.542: INFO: Pod pod-secrets-49e5f965-2786-491d-a5a3-6eee2ffd5943 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:35:55.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3771" for this suite.
Jan 28 14:36:01.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:36:01.848: INFO: namespace secrets-3771 deletion completed in 6.299099023s

• [SLOW TEST:15.601 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:36:01.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan 28 14:36:01.981: INFO: Waiting up to 5m0s for pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10" in namespace "var-expansion-1241" to be "success or failure"
Jan 28 14:36:02.013: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10": Phase="Pending", Reason="", readiness=false. Elapsed: 31.575406ms
Jan 28 14:36:04.028: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047063281s
Jan 28 14:36:06.035: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05365117s
Jan 28 14:36:08.046: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064452276s
Jan 28 14:36:10.059: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077895603s
STEP: Saw pod success
Jan 28 14:36:10.059: INFO: Pod "var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10" satisfied condition "success or failure"
Jan 28 14:36:10.064: INFO: Trying to get logs from node iruya-node pod var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10 container dapi-container: 
STEP: delete the pod
Jan 28 14:36:10.129: INFO: Waiting for pod var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10 to disappear
Jan 28 14:36:10.162: INFO: Pod var-expansion-14c6013a-a7ea-4dfd-9a11-4e173dac6d10 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:36:10.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1241" for this suite.
Jan 28 14:36:16.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:36:16.375: INFO: namespace var-expansion-1241 deletion completed in 6.174955888s

• [SLOW TEST:14.526 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:36:16.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan 28 14:36:16.429: INFO: namespace kubectl-7803
Jan 28 14:36:16.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7803'
Jan 28 14:36:16.790: INFO: stderr: ""
Jan 28 14:36:16.791: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 28 14:36:17.830: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:17.830: INFO: Found 0 / 1
Jan 28 14:36:18.807: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:18.807: INFO: Found 0 / 1
Jan 28 14:36:19.810: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:19.810: INFO: Found 0 / 1
Jan 28 14:36:20.800: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:20.800: INFO: Found 0 / 1
Jan 28 14:36:21.809: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:21.809: INFO: Found 0 / 1
Jan 28 14:36:22.809: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:22.809: INFO: Found 0 / 1
Jan 28 14:36:23.807: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:23.807: INFO: Found 0 / 1
Jan 28 14:36:24.800: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:24.800: INFO: Found 1 / 1
Jan 28 14:36:24.800: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 14:36:24.809: INFO: Selector matched 1 pods for map[app:redis]
Jan 28 14:36:24.809: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 14:36:24.809: INFO: wait on redis-master startup in kubectl-7803 
Jan 28 14:36:24.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fx2fm redis-master --namespace=kubectl-7803'
Jan 28 14:36:25.057: INFO: stderr: ""
Jan 28 14:36:25.057: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 28 Jan 14:36:22.999 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 28 Jan 14:36:22.999 # Server started, Redis version 3.2.12\n1:M 28 Jan 14:36:22.999 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 28 Jan 14:36:22.999 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 28 14:36:25.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7803'
Jan 28 14:36:25.348: INFO: stderr: ""
Jan 28 14:36:25.348: INFO: stdout: "service/rm2 exposed\n"
Jan 28 14:36:25.355: INFO: Service rm2 in namespace kubectl-7803 found.
STEP: exposing service
Jan 28 14:36:27.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7803'
Jan 28 14:36:27.649: INFO: stderr: ""
Jan 28 14:36:27.649: INFO: stdout: "service/rm3 exposed\n"
Jan 28 14:36:27.656: INFO: Service rm3 in namespace kubectl-7803 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:36:29.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7803" for this suite.
Jan 28 14:36:51.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:36:51.854: INFO: namespace kubectl-7803 deletion completed in 22.182771977s

• [SLOW TEST:35.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:36:51.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-aa3d03cf-02c2-4ad0-9720-ea707166e074
STEP: Creating a pod to test consume secrets
Jan 28 14:36:51.981: INFO: Waiting up to 5m0s for pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86" in namespace "secrets-613" to be "success or failure"
Jan 28 14:36:51.985: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86": Phase="Pending", Reason="", readiness=false. Elapsed: 3.487768ms
Jan 28 14:36:53.998: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016819847s
Jan 28 14:36:56.023: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041746452s
Jan 28 14:36:58.039: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05761414s
Jan 28 14:37:00.054: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072868511s
STEP: Saw pod success
Jan 28 14:37:00.054: INFO: Pod "pod-secrets-29dad157-a121-4133-b28e-867753177b86" satisfied condition "success or failure"
Jan 28 14:37:00.060: INFO: Trying to get logs from node iruya-node pod pod-secrets-29dad157-a121-4133-b28e-867753177b86 container secret-volume-test: 
STEP: delete the pod
Jan 28 14:37:00.152: INFO: Waiting for pod pod-secrets-29dad157-a121-4133-b28e-867753177b86 to disappear
Jan 28 14:37:00.179: INFO: Pod pod-secrets-29dad157-a121-4133-b28e-867753177b86 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:37:00.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-613" for this suite.
Jan 28 14:37:06.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:37:06.384: INFO: namespace secrets-613 deletion completed in 6.193044683s

• [SLOW TEST:14.526 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:37:06.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:37:06.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e" in namespace "downward-api-8495" to be "success or failure"
Jan 28 14:37:06.635: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e": Phase="Pending", Reason="", readiness=false. Elapsed: 82.575791ms
Jan 28 14:37:08.645: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0922048s
Jan 28 14:37:10.659: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106210705s
Jan 28 14:37:12.666: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113349893s
Jan 28 14:37:14.678: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125384064s
STEP: Saw pod success
Jan 28 14:37:14.678: INFO: Pod "downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e" satisfied condition "success or failure"
Jan 28 14:37:14.683: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e container client-container: 
STEP: delete the pod
Jan 28 14:37:14.979: INFO: Waiting for pod downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e to disappear
Jan 28 14:37:15.031: INFO: Pod downwardapi-volume-12d24bbe-405f-40e6-89b5-634de142157e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:37:15.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8495" for this suite.
Jan 28 14:37:21.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:37:21.225: INFO: namespace downward-api-8495 deletion completed in 6.18754427s

• [SLOW TEST:14.840 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:37:21.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan 28 14:37:21.341: INFO: Waiting up to 5m0s for pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a" in namespace "downward-api-6573" to be "success or failure"
Jan 28 14:37:21.346: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.129451ms
Jan 28 14:37:23.365: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02391887s
Jan 28 14:37:25.375: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033907003s
Jan 28 14:37:27.386: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045416369s
Jan 28 14:37:29.396: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055219177s
STEP: Saw pod success
Jan 28 14:37:29.396: INFO: Pod "downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a" satisfied condition "success or failure"
Jan 28 14:37:29.401: INFO: Trying to get logs from node iruya-node pod downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a container dapi-container: 
STEP: delete the pod
Jan 28 14:37:29.500: INFO: Waiting for pod downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a to disappear
Jan 28 14:37:29.508: INFO: Pod downward-api-6c594c2a-855b-4e96-93e4-5a681cdf512a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:37:29.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6573" for this suite.
Jan 28 14:37:35.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:37:35.731: INFO: namespace downward-api-6573 deletion completed in 6.162192963s

• [SLOW TEST:14.506 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:37:35.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-90bf19f7-8ba6-4e6c-8c4c-175f3d3150a9
STEP: Creating secret with name s-test-opt-upd-95d1708d-1493-4faa-ada5-df567442f2a3
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-90bf19f7-8ba6-4e6c-8c4c-175f3d3150a9
STEP: Updating secret s-test-opt-upd-95d1708d-1493-4faa-ada5-df567442f2a3
STEP: Creating secret with name s-test-opt-create-ae34acc4-6828-4ded-be1f-d685ddfad5a0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:37:50.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3919" for this suite.
Jan 28 14:38:14.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:38:14.718: INFO: namespace projected-3919 deletion completed in 24.200220895s

• [SLOW TEST:38.986 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:38:14.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 14:38:15.170: INFO: Waiting up to 5m0s for pod "pod-e1998347-4761-4aa5-8f37-14939940296a" in namespace "emptydir-8235" to be "success or failure"
Jan 28 14:38:15.206: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.8242ms
Jan 28 14:38:17.215: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044082499s
Jan 28 14:38:19.223: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052455026s
Jan 28 14:38:21.234: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062757233s
Jan 28 14:38:23.243: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072280501s
STEP: Saw pod success
Jan 28 14:38:23.243: INFO: Pod "pod-e1998347-4761-4aa5-8f37-14939940296a" satisfied condition "success or failure"
Jan 28 14:38:23.247: INFO: Trying to get logs from node iruya-node pod pod-e1998347-4761-4aa5-8f37-14939940296a container test-container: 
STEP: delete the pod
Jan 28 14:38:23.366: INFO: Waiting for pod pod-e1998347-4761-4aa5-8f37-14939940296a to disappear
Jan 28 14:38:23.376: INFO: Pod pod-e1998347-4761-4aa5-8f37-14939940296a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:38:23.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8235" for this suite.
Jan 28 14:38:29.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:38:29.589: INFO: namespace emptydir-8235 deletion completed in 6.200603521s

• [SLOW TEST:14.869 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:38:29.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-23d0a5e7-c3ea-4637-81e5-8a08b5765f9f
STEP: Creating a pod to test consume secrets
Jan 28 14:38:29.684: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d" in namespace "projected-5443" to be "success or failure"
Jan 28 14:38:29.761: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Pending", Reason="", readiness=false. Elapsed: 77.382038ms
Jan 28 14:38:31.772: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088162117s
Jan 28 14:38:33.802: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118099593s
Jan 28 14:38:35.808: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123825293s
Jan 28 14:38:37.827: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14318526s
Jan 28 14:38:39.837: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.153105025s
STEP: Saw pod success
Jan 28 14:38:39.837: INFO: Pod "pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d" satisfied condition "success or failure"
Jan 28 14:38:39.842: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 14:38:39.980: INFO: Waiting for pod pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d to disappear
Jan 28 14:38:39.987: INFO: Pod pod-projected-secrets-d0e8a721-e792-4641-9497-53fba4f3090d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:38:39.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5443" for this suite.
Jan 28 14:38:46.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:38:46.252: INFO: namespace projected-5443 deletion completed in 6.255509815s

• [SLOW TEST:16.663 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:38:46.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-69d314e7-e857-41bc-85ce-ea3845e79297
STEP: Creating a pod to test consume secrets
Jan 28 14:38:46.349: INFO: Waiting up to 5m0s for pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1" in namespace "secrets-2072" to be "success or failure"
Jan 28 14:38:46.359: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.508439ms
Jan 28 14:38:48.370: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019973562s
Jan 28 14:38:50.419: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069228366s
Jan 28 14:38:52.459: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109755651s
Jan 28 14:38:54.480: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.130722106s
STEP: Saw pod success
Jan 28 14:38:54.481: INFO: Pod "pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1" satisfied condition "success or failure"
Jan 28 14:38:54.489: INFO: Trying to get logs from node iruya-node pod pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1 container secret-volume-test: 
STEP: delete the pod
Jan 28 14:38:54.600: INFO: Waiting for pod pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1 to disappear
Jan 28 14:38:54.606: INFO: Pod pod-secrets-936a2071-d563-4a36-8bdf-75e6cffa86c1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:38:54.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2072" for this suite.
Jan 28 14:39:00.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:39:00.865: INFO: namespace secrets-2072 deletion completed in 6.25229515s

• [SLOW TEST:14.612 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:39:00.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 28 14:39:00.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3226'
Jan 28 14:39:01.157: INFO: stderr: ""
Jan 28 14:39:01.157: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jan 28 14:39:01.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3226'
Jan 28 14:39:06.569: INFO: stderr: ""
Jan 28 14:39:06.569: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:39:06.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3226" for this suite.
Jan 28 14:39:12.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:39:12.779: INFO: namespace kubectl-3226 deletion completed in 6.169282094s

• [SLOW TEST:11.913 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:39:12.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7ce21c64-2457-412b-89f5-e974d1263488
STEP: Creating a pod to test consume configMaps
Jan 28 14:39:12.941: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25" in namespace "projected-5554" to be "success or failure"
Jan 28 14:39:12.986: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25": Phase="Pending", Reason="", readiness=false. Elapsed: 44.182572ms
Jan 28 14:39:14.994: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051970558s
Jan 28 14:39:17.013: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070942326s
Jan 28 14:39:19.020: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078747333s
Jan 28 14:39:21.028: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085819486s
STEP: Saw pod success
Jan 28 14:39:21.028: INFO: Pod "pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25" satisfied condition "success or failure"
Jan 28 14:39:21.030: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 14:39:21.100: INFO: Waiting for pod pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25 to disappear
Jan 28 14:39:21.133: INFO: Pod pod-projected-configmaps-6bcfe868-948c-4c8a-a53a-f441dc075a25 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:39:21.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5554" for this suite.
Jan 28 14:39:27.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:39:27.577: INFO: namespace projected-5554 deletion completed in 6.43776345s

• [SLOW TEST:14.798 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:39:27.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:39:27.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2" in namespace "projected-5411" to be "success or failure"
Jan 28 14:39:27.715: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.865495ms
Jan 28 14:39:29.727: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032739238s
Jan 28 14:39:31.734: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040636916s
Jan 28 14:39:33.751: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057420087s
Jan 28 14:39:35.767: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073304801s
Jan 28 14:39:37.786: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091789153s
STEP: Saw pod success
Jan 28 14:39:37.786: INFO: Pod "downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2" satisfied condition "success or failure"
Jan 28 14:39:37.791: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2 container client-container: 
STEP: delete the pod
Jan 28 14:39:37.838: INFO: Waiting for pod downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2 to disappear
Jan 28 14:39:37.844: INFO: Pod downwardapi-volume-f01690a0-b8ea-4911-bbb5-6daeb423bda2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:39:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5411" for this suite.
Jan 28 14:39:43.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:39:44.095: INFO: namespace projected-5411 deletion completed in 6.243130711s

• [SLOW TEST:16.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:39:44.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 28 14:39:44.193: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:39:58.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9366" for this suite.
Jan 28 14:40:05.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:40:05.255: INFO: namespace init-container-9366 deletion completed in 6.270662864s

• [SLOW TEST:21.159 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:40:05.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:40:13.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9618" for this suite.
Jan 28 14:40:59.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:40:59.675: INFO: namespace kubelet-test-9618 deletion completed in 46.22973402s

• [SLOW TEST:54.420 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:40:59.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 28 14:40:59.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:41:10.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8319" for this suite.
Jan 28 14:42:02.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:42:02.521: INFO: namespace pods-8319 deletion completed in 52.22904075s

• [SLOW TEST:62.845 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:42:02.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2cd8d594-f688-46d4-a7f1-f92606becb88
STEP: Creating a pod to test consume configMaps
Jan 28 14:42:02.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5" in namespace "configmap-7173" to be "success or failure"
Jan 28 14:42:02.693: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.078426ms
Jan 28 14:42:04.704: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023731797s
Jan 28 14:42:06.717: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036238105s
Jan 28 14:42:08.725: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044660452s
Jan 28 14:42:10.747: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066538121s
STEP: Saw pod success
Jan 28 14:42:10.747: INFO: Pod "pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5" satisfied condition "success or failure"
Jan 28 14:42:10.757: INFO: Trying to get logs from node iruya-node pod pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5 container configmap-volume-test: 
STEP: delete the pod
Jan 28 14:42:10.973: INFO: Waiting for pod pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5 to disappear
Jan 28 14:42:10.991: INFO: Pod pod-configmaps-48f5a066-b382-48c2-b50a-83293535ded5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:42:10.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7173" for this suite.
Jan 28 14:42:17.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:42:17.262: INFO: namespace configmap-7173 deletion completed in 6.257025754s

• [SLOW TEST:14.739 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:42:17.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 14:42:25.574: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:42:25.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4094" for this suite.
Jan 28 14:42:31.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:42:32.024: INFO: namespace container-runtime-4094 deletion completed in 6.304643576s

• [SLOW TEST:14.760 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:42:32.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8c5a7567-4321-41a1-840c-2666dd4883c1
STEP: Creating a pod to test consume configMaps
Jan 28 14:42:32.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779" in namespace "configmap-8148" to be "success or failure"
Jan 28 14:42:32.171: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512273ms
Jan 28 14:42:34.179: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014350361s
Jan 28 14:42:36.189: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024382915s
Jan 28 14:42:38.206: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041155749s
Jan 28 14:42:40.218: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053241269s
Jan 28 14:42:42.238: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073034875s
STEP: Saw pod success
Jan 28 14:42:42.238: INFO: Pod "pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779" satisfied condition "success or failure"
Jan 28 14:42:42.245: INFO: Trying to get logs from node iruya-node pod pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779 container configmap-volume-test: 
STEP: delete the pod
Jan 28 14:42:42.362: INFO: Waiting for pod pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779 to disappear
Jan 28 14:42:42.373: INFO: Pod pod-configmaps-84277990-f63a-44ff-9f08-ea8f3a7af779 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:42:42.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8148" for this suite.
Jan 28 14:42:48.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:42:48.548: INFO: namespace configmap-8148 deletion completed in 6.1675069s

• [SLOW TEST:16.524 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:42:48.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3887
I0128 14:42:48.674973       9 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3887, replica count: 1
I0128 14:42:49.726392       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:50.727287       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:51.728296       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:52.728757       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:53.729568       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:54.730450       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 14:42:55.730958       9 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 14:42:55.885: INFO: Created: latency-svc-w244t
Jan 28 14:42:55.913: INFO: Got endpoints: latency-svc-w244t [81.822584ms]
Jan 28 14:42:56.023: INFO: Created: latency-svc-5x22m
Jan 28 14:42:56.049: INFO: Got endpoints: latency-svc-5x22m [133.94496ms]
Jan 28 14:42:56.112: INFO: Created: latency-svc-fckvg
Jan 28 14:42:56.189: INFO: Created: latency-svc-jx6qv
Jan 28 14:42:56.189: INFO: Got endpoints: latency-svc-fckvg [272.346962ms]
Jan 28 14:42:56.199: INFO: Got endpoints: latency-svc-jx6qv [282.147123ms]
Jan 28 14:42:56.272: INFO: Created: latency-svc-5sbxj
Jan 28 14:42:56.331: INFO: Got endpoints: latency-svc-5sbxj [414.157749ms]
Jan 28 14:42:56.371: INFO: Created: latency-svc-hqjt9
Jan 28 14:42:56.402: INFO: Got endpoints: latency-svc-hqjt9 [484.303945ms]
Jan 28 14:42:56.498: INFO: Created: latency-svc-nfp95
Jan 28 14:42:56.513: INFO: Got endpoints: latency-svc-nfp95 [594.946047ms]
Jan 28 14:42:56.570: INFO: Created: latency-svc-gvw75
Jan 28 14:42:56.570: INFO: Got endpoints: latency-svc-gvw75 [652.541778ms]
Jan 28 14:42:56.645: INFO: Created: latency-svc-4d5k6
Jan 28 14:42:56.654: INFO: Got endpoints: latency-svc-4d5k6 [140.245627ms]
Jan 28 14:42:56.704: INFO: Created: latency-svc-86h2v
Jan 28 14:42:56.802: INFO: Created: latency-svc-zjf75
Jan 28 14:42:56.803: INFO: Got endpoints: latency-svc-86h2v [885.218594ms]
Jan 28 14:42:56.812: INFO: Got endpoints: latency-svc-zjf75 [893.81222ms]
Jan 28 14:42:56.875: INFO: Created: latency-svc-m5sgm
Jan 28 14:42:56.881: INFO: Got endpoints: latency-svc-m5sgm [964.128304ms]
Jan 28 14:42:56.990: INFO: Created: latency-svc-t5swx
Jan 28 14:42:57.002: INFO: Got endpoints: latency-svc-t5swx [1.084550354s]
Jan 28 14:42:57.053: INFO: Created: latency-svc-vf89f
Jan 28 14:42:57.060: INFO: Got endpoints: latency-svc-vf89f [1.141957523s]
Jan 28 14:42:57.168: INFO: Created: latency-svc-xmftg
Jan 28 14:42:57.192: INFO: Got endpoints: latency-svc-xmftg [1.27406878s]
Jan 28 14:42:57.228: INFO: Created: latency-svc-8fs4n
Jan 28 14:42:57.245: INFO: Got endpoints: latency-svc-8fs4n [1.32680812s]
Jan 28 14:42:57.352: INFO: Created: latency-svc-89h6m
Jan 28 14:42:57.363: INFO: Got endpoints: latency-svc-89h6m [1.44491571s]
Jan 28 14:42:57.424: INFO: Created: latency-svc-9cjts
Jan 28 14:42:57.425: INFO: Got endpoints: latency-svc-9cjts [1.375900889s]
Jan 28 14:42:57.583: INFO: Created: latency-svc-4sc4s
Jan 28 14:42:57.601: INFO: Got endpoints: latency-svc-4sc4s [1.411538778s]
Jan 28 14:42:57.786: INFO: Created: latency-svc-dlwq7
Jan 28 14:42:57.804: INFO: Got endpoints: latency-svc-dlwq7 [1.604456917s]
Jan 28 14:42:57.866: INFO: Created: latency-svc-hqss8
Jan 28 14:42:57.882: INFO: Got endpoints: latency-svc-hqss8 [1.550143606s]
Jan 28 14:42:58.005: INFO: Created: latency-svc-7q2xx
Jan 28 14:42:58.033: INFO: Got endpoints: latency-svc-7q2xx [1.630552849s]
Jan 28 14:42:58.075: INFO: Created: latency-svc-4m84c
Jan 28 14:42:58.089: INFO: Got endpoints: latency-svc-4m84c [1.518439952s]
Jan 28 14:42:58.196: INFO: Created: latency-svc-svrw6
Jan 28 14:42:58.202: INFO: Got endpoints: latency-svc-svrw6 [1.547631356s]
Jan 28 14:42:58.252: INFO: Created: latency-svc-99gbt
Jan 28 14:42:58.273: INFO: Got endpoints: latency-svc-99gbt [1.469920056s]
Jan 28 14:42:58.422: INFO: Created: latency-svc-vtv5v
Jan 28 14:42:58.426: INFO: Got endpoints: latency-svc-vtv5v [1.613823567s]
Jan 28 14:42:58.461: INFO: Created: latency-svc-jppmt
Jan 28 14:42:58.493: INFO: Got endpoints: latency-svc-jppmt [1.611133261s]
Jan 28 14:42:58.577: INFO: Created: latency-svc-zhfk5
Jan 28 14:42:58.594: INFO: Got endpoints: latency-svc-zhfk5 [1.591277078s]
Jan 28 14:42:58.656: INFO: Created: latency-svc-sbg7h
Jan 28 14:42:58.670: INFO: Got endpoints: latency-svc-sbg7h [1.609083537s]
Jan 28 14:42:58.771: INFO: Created: latency-svc-jww8l
Jan 28 14:42:58.789: INFO: Got endpoints: latency-svc-jww8l [1.596808094s]
Jan 28 14:42:58.894: INFO: Created: latency-svc-2x2gf
Jan 28 14:42:58.904: INFO: Got endpoints: latency-svc-2x2gf [1.658597596s]
Jan 28 14:42:58.948: INFO: Created: latency-svc-7m4lz
Jan 28 14:42:58.958: INFO: Got endpoints: latency-svc-7m4lz [1.594551791s]
Jan 28 14:42:59.000: INFO: Created: latency-svc-k4bjt
Jan 28 14:42:59.044: INFO: Got endpoints: latency-svc-k4bjt [1.619468515s]
Jan 28 14:42:59.089: INFO: Created: latency-svc-lm5pr
Jan 28 14:42:59.095: INFO: Got endpoints: latency-svc-lm5pr [1.49290452s]
Jan 28 14:42:59.152: INFO: Created: latency-svc-zwwlm
Jan 28 14:42:59.208: INFO: Got endpoints: latency-svc-zwwlm [1.403738712s]
Jan 28 14:42:59.241: INFO: Created: latency-svc-rgrs5
Jan 28 14:42:59.241: INFO: Got endpoints: latency-svc-rgrs5 [1.359010809s]
Jan 28 14:42:59.289: INFO: Created: latency-svc-r9t7t
Jan 28 14:42:59.378: INFO: Created: latency-svc-5zkft
Jan 28 14:42:59.380: INFO: Got endpoints: latency-svc-r9t7t [1.346988703s]
Jan 28 14:42:59.391: INFO: Got endpoints: latency-svc-5zkft [1.301835073s]
Jan 28 14:42:59.450: INFO: Created: latency-svc-nsx6q
Jan 28 14:42:59.515: INFO: Got endpoints: latency-svc-nsx6q [1.313324267s]
Jan 28 14:42:59.551: INFO: Created: latency-svc-tp5vp
Jan 28 14:42:59.560: INFO: Got endpoints: latency-svc-tp5vp [1.286210524s]
Jan 28 14:42:59.693: INFO: Created: latency-svc-5dnbj
Jan 28 14:42:59.704: INFO: Got endpoints: latency-svc-5dnbj [1.277804433s]
Jan 28 14:42:59.740: INFO: Created: latency-svc-rkhg6
Jan 28 14:42:59.744: INFO: Got endpoints: latency-svc-rkhg6 [1.251276124s]
Jan 28 14:42:59.790: INFO: Created: latency-svc-lrz7j
Jan 28 14:42:59.884: INFO: Got endpoints: latency-svc-lrz7j [1.289333711s]
Jan 28 14:42:59.888: INFO: Created: latency-svc-ncz48
Jan 28 14:42:59.897: INFO: Got endpoints: latency-svc-ncz48 [1.226975126s]
Jan 28 14:42:59.935: INFO: Created: latency-svc-xfclz
Jan 28 14:42:59.941: INFO: Got endpoints: latency-svc-xfclz [1.150961599s]
Jan 28 14:42:59.981: INFO: Created: latency-svc-g65vc
Jan 28 14:43:00.052: INFO: Got endpoints: latency-svc-g65vc [1.148047999s]
Jan 28 14:43:00.055: INFO: Created: latency-svc-klw66
Jan 28 14:43:00.084: INFO: Got endpoints: latency-svc-klw66 [1.12662853s]
Jan 28 14:43:00.125: INFO: Created: latency-svc-szzlz
Jan 28 14:43:00.249: INFO: Got endpoints: latency-svc-szzlz [1.203826536s]
Jan 28 14:43:00.249: INFO: Created: latency-svc-dk7nk
Jan 28 14:43:00.264: INFO: Got endpoints: latency-svc-dk7nk [1.168989581s]
Jan 28 14:43:00.328: INFO: Created: latency-svc-6cjct
Jan 28 14:43:00.337: INFO: Got endpoints: latency-svc-6cjct [1.128141815s]
Jan 28 14:43:00.439: INFO: Created: latency-svc-2zpk6
Jan 28 14:43:00.506: INFO: Got endpoints: latency-svc-2zpk6 [1.264090381s]
Jan 28 14:43:00.511: INFO: Created: latency-svc-7p8nc
Jan 28 14:43:00.589: INFO: Got endpoints: latency-svc-7p8nc [1.208269333s]
Jan 28 14:43:00.652: INFO: Created: latency-svc-zxnf7
Jan 28 14:43:00.677: INFO: Got endpoints: latency-svc-zxnf7 [1.285709058s]
Jan 28 14:43:00.779: INFO: Created: latency-svc-8tgcs
Jan 28 14:43:00.784: INFO: Got endpoints: latency-svc-8tgcs [1.268597916s]
Jan 28 14:43:00.827: INFO: Created: latency-svc-vk5c8
Jan 28 14:43:00.828: INFO: Got endpoints: latency-svc-vk5c8 [1.267358632s]
Jan 28 14:43:00.939: INFO: Created: latency-svc-4b94n
Jan 28 14:43:00.956: INFO: Got endpoints: latency-svc-4b94n [1.251843904s]
Jan 28 14:43:00.981: INFO: Created: latency-svc-sdnbl
Jan 28 14:43:01.003: INFO: Got endpoints: latency-svc-sdnbl [1.259070244s]
Jan 28 14:43:01.033: INFO: Created: latency-svc-k4rpl
Jan 28 14:43:01.107: INFO: Got endpoints: latency-svc-k4rpl [1.223558456s]
Jan 28 14:43:01.118: INFO: Created: latency-svc-bjbp2
Jan 28 14:43:01.131: INFO: Got endpoints: latency-svc-bjbp2 [1.233471877s]
Jan 28 14:43:01.175: INFO: Created: latency-svc-tb8j9
Jan 28 14:43:01.182: INFO: Got endpoints: latency-svc-tb8j9 [1.240831347s]
Jan 28 14:43:01.206: INFO: Created: latency-svc-dqcvh
Jan 28 14:43:01.317: INFO: Got endpoints: latency-svc-dqcvh [1.265087052s]
Jan 28 14:43:01.357: INFO: Created: latency-svc-n54z9
Jan 28 14:43:01.371: INFO: Got endpoints: latency-svc-n54z9 [1.285886309s]
Jan 28 14:43:01.421: INFO: Created: latency-svc-f42zt
Jan 28 14:43:01.468: INFO: Got endpoints: latency-svc-f42zt [1.218427377s]
Jan 28 14:43:01.500: INFO: Created: latency-svc-vrjwl
Jan 28 14:43:01.504: INFO: Got endpoints: latency-svc-vrjwl [1.239061024s]
Jan 28 14:43:01.544: INFO: Created: latency-svc-zvtkw
Jan 28 14:43:01.547: INFO: Got endpoints: latency-svc-zvtkw [1.20967896s]
Jan 28 14:43:01.800: INFO: Created: latency-svc-lrvtx
Jan 28 14:43:01.818: INFO: Got endpoints: latency-svc-lrvtx [1.312477088s]
Jan 28 14:43:01.872: INFO: Created: latency-svc-cx4nw
Jan 28 14:43:01.939: INFO: Got endpoints: latency-svc-cx4nw [1.348953248s]
Jan 28 14:43:02.010: INFO: Created: latency-svc-vkm87
Jan 28 14:43:02.020: INFO: Got endpoints: latency-svc-vkm87 [1.3424399s]
Jan 28 14:43:02.105: INFO: Created: latency-svc-g5crj
Jan 28 14:43:02.112: INFO: Got endpoints: latency-svc-g5crj [1.327143939s]
Jan 28 14:43:02.184: INFO: Created: latency-svc-w7f7c
Jan 28 14:43:02.191: INFO: Got endpoints: latency-svc-w7f7c [1.362870685s]
Jan 28 14:43:02.342: INFO: Created: latency-svc-sx57p
Jan 28 14:43:02.351: INFO: Got endpoints: latency-svc-sx57p [1.394608692s]
Jan 28 14:43:02.410: INFO: Created: latency-svc-fmnn4
Jan 28 14:43:02.488: INFO: Got endpoints: latency-svc-fmnn4 [1.484094952s]
Jan 28 14:43:02.501: INFO: Created: latency-svc-cckk7
Jan 28 14:43:02.505: INFO: Got endpoints: latency-svc-cckk7 [1.397580323s]
Jan 28 14:43:02.570: INFO: Created: latency-svc-vzd27
Jan 28 14:43:02.579: INFO: Got endpoints: latency-svc-vzd27 [1.448070615s]
Jan 28 14:43:02.666: INFO: Created: latency-svc-ht9m6
Jan 28 14:43:02.677: INFO: Got endpoints: latency-svc-ht9m6 [1.494986862s]
Jan 28 14:43:02.714: INFO: Created: latency-svc-lkr48
Jan 28 14:43:02.726: INFO: Got endpoints: latency-svc-lkr48 [1.40839403s]
Jan 28 14:43:02.846: INFO: Created: latency-svc-n8std
Jan 28 14:43:02.847: INFO: Got endpoints: latency-svc-n8std [1.475877343s]
Jan 28 14:43:02.892: INFO: Created: latency-svc-sllw2
Jan 28 14:43:02.907: INFO: Got endpoints: latency-svc-sllw2 [1.43959647s]
Jan 28 14:43:02.984: INFO: Created: latency-svc-458h4
Jan 28 14:43:02.989: INFO: Got endpoints: latency-svc-458h4 [1.484270746s]
Jan 28 14:43:03.065: INFO: Created: latency-svc-k4cpp
Jan 28 14:43:03.078: INFO: Got endpoints: latency-svc-k4cpp [1.530914481s]
Jan 28 14:43:03.163: INFO: Created: latency-svc-wvdqc
Jan 28 14:43:03.171: INFO: Got endpoints: latency-svc-wvdqc [1.352147119s]
Jan 28 14:43:03.205: INFO: Created: latency-svc-lmx2p
Jan 28 14:43:03.212: INFO: Got endpoints: latency-svc-lmx2p [1.272672172s]
Jan 28 14:43:03.320: INFO: Created: latency-svc-cf5lv
Jan 28 14:43:03.329: INFO: Got endpoints: latency-svc-cf5lv [1.308163183s]
Jan 28 14:43:03.399: INFO: Created: latency-svc-stwnx
Jan 28 14:43:03.570: INFO: Got endpoints: latency-svc-stwnx [1.457795039s]
Jan 28 14:43:03.589: INFO: Created: latency-svc-brh6d
Jan 28 14:43:03.602: INFO: Got endpoints: latency-svc-brh6d [1.410706479s]
Jan 28 14:43:03.673: INFO: Created: latency-svc-ngtjw
Jan 28 14:43:03.816: INFO: Created: latency-svc-slzlz
Jan 28 14:43:03.819: INFO: Got endpoints: latency-svc-ngtjw [1.46705725s]
Jan 28 14:43:03.826: INFO: Got endpoints: latency-svc-slzlz [1.337675802s]
Jan 28 14:43:03.870: INFO: Created: latency-svc-zfhdl
Jan 28 14:43:03.883: INFO: Got endpoints: latency-svc-zfhdl [1.377476201s]
Jan 28 14:43:04.056: INFO: Created: latency-svc-kfzf8
Jan 28 14:43:04.072: INFO: Got endpoints: latency-svc-kfzf8 [1.492083682s]
Jan 28 14:43:04.169: INFO: Created: latency-svc-t62hx
Jan 28 14:43:04.210: INFO: Got endpoints: latency-svc-t62hx [1.533358669s]
Jan 28 14:43:04.262: INFO: Created: latency-svc-sgsfq
Jan 28 14:43:04.340: INFO: Got endpoints: latency-svc-sgsfq [1.61442641s]
Jan 28 14:43:04.370: INFO: Created: latency-svc-wjjqt
Jan 28 14:43:04.376: INFO: Got endpoints: latency-svc-wjjqt [1.529109803s]
Jan 28 14:43:04.551: INFO: Created: latency-svc-5kkzm
Jan 28 14:43:04.592: INFO: Got endpoints: latency-svc-5kkzm [1.683827705s]
Jan 28 14:43:04.598: INFO: Created: latency-svc-f7gjg
Jan 28 14:43:04.639: INFO: Got endpoints: latency-svc-f7gjg [1.649509386s]
Jan 28 14:43:04.644: INFO: Created: latency-svc-2qd2t
Jan 28 14:43:04.705: INFO: Got endpoints: latency-svc-2qd2t [1.62644198s]
Jan 28 14:43:04.737: INFO: Created: latency-svc-k4mpt
Jan 28 14:43:04.751: INFO: Got endpoints: latency-svc-k4mpt [1.579584259s]
Jan 28 14:43:04.794: INFO: Created: latency-svc-thhvc
Jan 28 14:43:04.797: INFO: Got endpoints: latency-svc-thhvc [1.584070992s]
Jan 28 14:43:04.907: INFO: Created: latency-svc-db2vg
Jan 28 14:43:04.917: INFO: Got endpoints: latency-svc-db2vg [1.587660939s]
Jan 28 14:43:04.967: INFO: Created: latency-svc-57lcg
Jan 28 14:43:04.985: INFO: Got endpoints: latency-svc-57lcg [1.413978889s]
Jan 28 14:43:05.116: INFO: Created: latency-svc-2dgh7
Jan 28 14:43:05.138: INFO: Got endpoints: latency-svc-2dgh7 [1.536252001s]
Jan 28 14:43:05.205: INFO: Created: latency-svc-7rttj
Jan 28 14:43:05.294: INFO: Got endpoints: latency-svc-7rttj [1.467871806s]
Jan 28 14:43:05.307: INFO: Created: latency-svc-vcwqj
Jan 28 14:43:05.319: INFO: Got endpoints: latency-svc-vcwqj [1.500163235s]
Jan 28 14:43:05.353: INFO: Created: latency-svc-5bztz
Jan 28 14:43:05.358: INFO: Got endpoints: latency-svc-5bztz [1.473501226s]
Jan 28 14:43:05.378: INFO: Created: latency-svc-k9mvl
Jan 28 14:43:05.386: INFO: Got endpoints: latency-svc-k9mvl [1.313941984s]
Jan 28 14:43:05.528: INFO: Created: latency-svc-s8xxz
Jan 28 14:43:05.572: INFO: Got endpoints: latency-svc-s8xxz [1.360817255s]
Jan 28 14:43:05.599: INFO: Created: latency-svc-jxcw5
Jan 28 14:43:05.602: INFO: Got endpoints: latency-svc-jxcw5 [1.261394781s]
Jan 28 14:43:05.699: INFO: Created: latency-svc-mb6vs
Jan 28 14:43:05.764: INFO: Got endpoints: latency-svc-mb6vs [1.387292596s]
Jan 28 14:43:05.962: INFO: Created: latency-svc-s7w4v
Jan 28 14:43:05.982: INFO: Got endpoints: latency-svc-s7w4v [1.389513687s]
Jan 28 14:43:06.015: INFO: Created: latency-svc-cpsxc
Jan 28 14:43:06.021: INFO: Got endpoints: latency-svc-cpsxc [1.381481685s]
Jan 28 14:43:06.192: INFO: Created: latency-svc-dfkw4
Jan 28 14:43:06.209: INFO: Got endpoints: latency-svc-dfkw4 [1.504505555s]
Jan 28 14:43:06.307: INFO: Created: latency-svc-l2ksn
Jan 28 14:43:06.469: INFO: Got endpoints: latency-svc-l2ksn [1.717789259s]
Jan 28 14:43:06.544: INFO: Created: latency-svc-k6z5k
Jan 28 14:43:06.646: INFO: Got endpoints: latency-svc-k6z5k [1.849730786s]
Jan 28 14:43:06.677: INFO: Created: latency-svc-f77k7
Jan 28 14:43:06.713: INFO: Got endpoints: latency-svc-f77k7 [1.796350949s]
Jan 28 14:43:06.804: INFO: Created: latency-svc-w2s9s
Jan 28 14:43:06.814: INFO: Got endpoints: latency-svc-w2s9s [1.828403761s]
Jan 28 14:43:06.846: INFO: Created: latency-svc-fz4rc
Jan 28 14:43:06.854: INFO: Got endpoints: latency-svc-fz4rc [1.715514141s]
Jan 28 14:43:06.898: INFO: Created: latency-svc-lj6xf
Jan 28 14:43:06.976: INFO: Got endpoints: latency-svc-lj6xf [1.681664969s]
Jan 28 14:43:06.986: INFO: Created: latency-svc-fmzj5
Jan 28 14:43:06.995: INFO: Got endpoints: latency-svc-fmzj5 [1.675823877s]
Jan 28 14:43:07.036: INFO: Created: latency-svc-zr4lw
Jan 28 14:43:07.042: INFO: Got endpoints: latency-svc-zr4lw [1.683287365s]
Jan 28 14:43:07.068: INFO: Created: latency-svc-tmtbs
Jan 28 14:43:07.195: INFO: Got endpoints: latency-svc-tmtbs [1.808996578s]
Jan 28 14:43:07.223: INFO: Created: latency-svc-jp4rn
Jan 28 14:43:07.236: INFO: Got endpoints: latency-svc-jp4rn [1.663697511s]
Jan 28 14:43:07.276: INFO: Created: latency-svc-kzhcn
Jan 28 14:43:07.367: INFO: Got endpoints: latency-svc-kzhcn [1.764293098s]
Jan 28 14:43:07.372: INFO: Created: latency-svc-2dzfr
Jan 28 14:43:07.402: INFO: Got endpoints: latency-svc-2dzfr [1.637411327s]
Jan 28 14:43:07.405: INFO: Created: latency-svc-d2s2w
Jan 28 14:43:07.410: INFO: Got endpoints: latency-svc-d2s2w [1.428161197s]
Jan 28 14:43:07.454: INFO: Created: latency-svc-48hv7
Jan 28 14:43:07.460: INFO: Got endpoints: latency-svc-48hv7 [1.43881729s]
Jan 28 14:43:07.563: INFO: Created: latency-svc-b9ccz
Jan 28 14:43:07.576: INFO: Got endpoints: latency-svc-b9ccz [1.366525002s]
Jan 28 14:43:07.648: INFO: Created: latency-svc-nqdn9
Jan 28 14:43:07.773: INFO: Created: latency-svc-c7842
Jan 28 14:43:07.780: INFO: Got endpoints: latency-svc-nqdn9 [1.30990548s]
Jan 28 14:43:07.799: INFO: Got endpoints: latency-svc-c7842 [1.152124903s]
Jan 28 14:43:07.886: INFO: Created: latency-svc-r7qt9
Jan 28 14:43:07.991: INFO: Got endpoints: latency-svc-r7qt9 [1.277749724s]
Jan 28 14:43:07.997: INFO: Created: latency-svc-lqc9q
Jan 28 14:43:08.002: INFO: Got endpoints: latency-svc-lqc9q [1.187787915s]
Jan 28 14:43:08.035: INFO: Created: latency-svc-c9lvx
Jan 28 14:43:08.043: INFO: Got endpoints: latency-svc-c9lvx [1.188336698s]
Jan 28 14:43:08.091: INFO: Created: latency-svc-rnd5g
Jan 28 14:43:08.144: INFO: Got endpoints: latency-svc-rnd5g [1.167830253s]
Jan 28 14:43:08.160: INFO: Created: latency-svc-m6trs
Jan 28 14:43:08.193: INFO: Got endpoints: latency-svc-m6trs [1.197562905s]
Jan 28 14:43:08.213: INFO: Created: latency-svc-nlstd
Jan 28 14:43:08.213: INFO: Got endpoints: latency-svc-nlstd [1.171675581s]
Jan 28 14:43:08.251: INFO: Created: latency-svc-vn6bg
Jan 28 14:43:08.349: INFO: Got endpoints: latency-svc-vn6bg [1.153741893s]
Jan 28 14:43:08.357: INFO: Created: latency-svc-d49bl
Jan 28 14:43:08.373: INFO: Got endpoints: latency-svc-d49bl [1.136594566s]
Jan 28 14:43:08.425: INFO: Created: latency-svc-krqcz
Jan 28 14:43:08.431: INFO: Got endpoints: latency-svc-krqcz [1.063977827s]
Jan 28 14:43:08.516: INFO: Created: latency-svc-fj8b8
Jan 28 14:43:08.523: INFO: Got endpoints: latency-svc-fj8b8 [1.120762248s]
Jan 28 14:43:08.573: INFO: Created: latency-svc-lf66z
Jan 28 14:43:08.594: INFO: Got endpoints: latency-svc-lf66z [1.184055386s]
Jan 28 14:43:08.697: INFO: Created: latency-svc-jzwzl
Jan 28 14:43:08.717: INFO: Got endpoints: latency-svc-jzwzl [1.257468339s]
Jan 28 14:43:08.722: INFO: Created: latency-svc-js85k
Jan 28 14:43:08.727: INFO: Got endpoints: latency-svc-js85k [1.150658791s]
Jan 28 14:43:08.879: INFO: Created: latency-svc-6vsp5
Jan 28 14:43:08.892: INFO: Got endpoints: latency-svc-6vsp5 [1.11109948s]
Jan 28 14:43:08.923: INFO: Created: latency-svc-r9q6w
Jan 28 14:43:08.930: INFO: Got endpoints: latency-svc-r9q6w [1.130853905s]
Jan 28 14:43:09.062: INFO: Created: latency-svc-89rpc
Jan 28 14:43:09.068: INFO: Got endpoints: latency-svc-89rpc [1.076599646s]
Jan 28 14:43:09.122: INFO: Created: latency-svc-26r4w
Jan 28 14:43:09.132: INFO: Got endpoints: latency-svc-26r4w [1.130172095s]
Jan 28 14:43:09.311: INFO: Created: latency-svc-dj2w9
Jan 28 14:43:09.325: INFO: Got endpoints: latency-svc-dj2w9 [1.282104697s]
Jan 28 14:43:09.542: INFO: Created: latency-svc-6xpv2
Jan 28 14:43:09.592: INFO: Got endpoints: latency-svc-6xpv2 [1.44705801s]
Jan 28 14:43:09.602: INFO: Created: latency-svc-m9rp2
Jan 28 14:43:09.603: INFO: Got endpoints: latency-svc-m9rp2 [1.409536545s]
Jan 28 14:43:09.808: INFO: Created: latency-svc-h2pkf
Jan 28 14:43:09.831: INFO: Got endpoints: latency-svc-h2pkf [1.617485998s]
Jan 28 14:43:10.072: INFO: Created: latency-svc-dwm4d
Jan 28 14:43:10.081: INFO: Got endpoints: latency-svc-dwm4d [1.731149697s]
Jan 28 14:43:10.156: INFO: Created: latency-svc-ng8hv
Jan 28 14:43:10.163: INFO: Got endpoints: latency-svc-ng8hv [1.789989712s]
Jan 28 14:43:10.369: INFO: Created: latency-svc-4m7rq
Jan 28 14:43:10.582: INFO: Got endpoints: latency-svc-4m7rq [2.150252108s]
Jan 28 14:43:10.592: INFO: Created: latency-svc-hntgl
Jan 28 14:43:10.603: INFO: Got endpoints: latency-svc-hntgl [2.080188641s]
Jan 28 14:43:10.762: INFO: Created: latency-svc-xg4wj
Jan 28 14:43:10.767: INFO: Got endpoints: latency-svc-xg4wj [2.172837458s]
Jan 28 14:43:10.846: INFO: Created: latency-svc-899lx
Jan 28 14:43:10.859: INFO: Got endpoints: latency-svc-899lx [2.141466118s]
Jan 28 14:43:10.991: INFO: Created: latency-svc-9vxhk
Jan 28 14:43:11.002: INFO: Got endpoints: latency-svc-9vxhk [2.274961479s]
Jan 28 14:43:11.057: INFO: Created: latency-svc-s2pzn
Jan 28 14:43:11.071: INFO: Got endpoints: latency-svc-s2pzn [2.179257404s]
Jan 28 14:43:11.143: INFO: Created: latency-svc-589vp
Jan 28 14:43:11.157: INFO: Got endpoints: latency-svc-589vp [2.226540417s]
Jan 28 14:43:11.222: INFO: Created: latency-svc-gr4xz
Jan 28 14:43:11.397: INFO: Got endpoints: latency-svc-gr4xz [2.328029236s]
Jan 28 14:43:11.462: INFO: Created: latency-svc-xxlhf
Jan 28 14:43:11.585: INFO: Got endpoints: latency-svc-xxlhf [2.452709788s]
Jan 28 14:43:11.592: INFO: Created: latency-svc-8vhqq
Jan 28 14:43:11.600: INFO: Got endpoints: latency-svc-8vhqq [2.275102451s]
Jan 28 14:43:11.816: INFO: Created: latency-svc-2fpdj
Jan 28 14:43:11.826: INFO: Got endpoints: latency-svc-2fpdj [2.233220978s]
Jan 28 14:43:12.001: INFO: Created: latency-svc-mmxvf
Jan 28 14:43:12.019: INFO: Got endpoints: latency-svc-mmxvf [2.415610733s]
Jan 28 14:43:12.053: INFO: Created: latency-svc-lp49h
Jan 28 14:43:12.068: INFO: Got endpoints: latency-svc-lp49h [2.236531683s]
Jan 28 14:43:12.157: INFO: Created: latency-svc-77t2d
Jan 28 14:43:12.186: INFO: Got endpoints: latency-svc-77t2d [2.104946269s]
Jan 28 14:43:12.189: INFO: Created: latency-svc-rhhm2
Jan 28 14:43:12.193: INFO: Got endpoints: latency-svc-rhhm2 [2.030202958s]
Jan 28 14:43:12.243: INFO: Created: latency-svc-cf4h5
Jan 28 14:43:12.318: INFO: Got endpoints: latency-svc-cf4h5 [1.735601762s]
Jan 28 14:43:12.341: INFO: Created: latency-svc-2zvwt
Jan 28 14:43:12.341: INFO: Got endpoints: latency-svc-2zvwt [1.736910748s]
Jan 28 14:43:12.368: INFO: Created: latency-svc-2rdv7
Jan 28 14:43:12.373: INFO: Got endpoints: latency-svc-2rdv7 [1.605562155s]
Jan 28 14:43:12.403: INFO: Created: latency-svc-kgpb7
Jan 28 14:43:12.415: INFO: Got endpoints: latency-svc-kgpb7 [1.555151219s]
Jan 28 14:43:12.526: INFO: Created: latency-svc-f6k9r
Jan 28 14:43:12.559: INFO: Got endpoints: latency-svc-f6k9r [1.55642748s]
Jan 28 14:43:12.668: INFO: Created: latency-svc-5g784
Jan 28 14:43:12.703: INFO: Created: latency-svc-25l7f
Jan 28 14:43:12.704: INFO: Got endpoints: latency-svc-5g784 [1.632080812s]
Jan 28 14:43:12.834: INFO: Got endpoints: latency-svc-25l7f [1.676906379s]
Jan 28 14:43:12.852: INFO: Created: latency-svc-2sq5w
Jan 28 14:43:12.892: INFO: Got endpoints: latency-svc-2sq5w [1.494969962s]
Jan 28 14:43:13.082: INFO: Created: latency-svc-f2fvk
Jan 28 14:43:13.094: INFO: Got endpoints: latency-svc-f2fvk [1.508754s]
Jan 28 14:43:13.140: INFO: Created: latency-svc-22wcz
Jan 28 14:43:13.266: INFO: Got endpoints: latency-svc-22wcz [1.665699155s]
Jan 28 14:43:13.273: INFO: Created: latency-svc-k4fhp
Jan 28 14:43:13.277: INFO: Got endpoints: latency-svc-k4fhp [1.45094184s]
Jan 28 14:43:13.341: INFO: Created: latency-svc-7h7qw
Jan 28 14:43:13.342: INFO: Got endpoints: latency-svc-7h7qw [1.322947235s]
Jan 28 14:43:13.496: INFO: Created: latency-svc-7hrc8
Jan 28 14:43:13.504: INFO: Got endpoints: latency-svc-7hrc8 [1.435690805s]
Jan 28 14:43:13.576: INFO: Created: latency-svc-szjfx
Jan 28 14:43:13.584: INFO: Got endpoints: latency-svc-szjfx [1.397084294s]
Jan 28 14:43:13.682: INFO: Created: latency-svc-glc9m
Jan 28 14:43:13.690: INFO: Got endpoints: latency-svc-glc9m [1.496333617s]
Jan 28 14:43:13.850: INFO: Created: latency-svc-wksh6
Jan 28 14:43:13.850: INFO: Got endpoints: latency-svc-wksh6 [1.532080471s]
Jan 28 14:43:13.999: INFO: Created: latency-svc-p56fg
Jan 28 14:43:14.014: INFO: Got endpoints: latency-svc-p56fg [1.673450979s]
Jan 28 14:43:14.076: INFO: Created: latency-svc-97f8r
Jan 28 14:43:14.149: INFO: Got endpoints: latency-svc-97f8r [1.775942883s]
Jan 28 14:43:14.173: INFO: Created: latency-svc-r7h9f
Jan 28 14:43:14.173: INFO: Got endpoints: latency-svc-r7h9f [1.758213468s]
Jan 28 14:43:14.207: INFO: Created: latency-svc-tcd68
Jan 28 14:43:14.237: INFO: Got endpoints: latency-svc-tcd68 [1.677050165s]
Jan 28 14:43:14.249: INFO: Created: latency-svc-nkfqk
Jan 28 14:43:14.311: INFO: Got endpoints: latency-svc-nkfqk [1.606496458s]
Jan 28 14:43:14.337: INFO: Created: latency-svc-lrvtq
Jan 28 14:43:14.351: INFO: Got endpoints: latency-svc-lrvtq [1.516597211s]
Jan 28 14:43:14.384: INFO: Created: latency-svc-7dfdc
Jan 28 14:43:14.395: INFO: Got endpoints: latency-svc-7dfdc [1.503106517s]
Jan 28 14:43:14.464: INFO: Created: latency-svc-tp7rc
Jan 28 14:43:14.471: INFO: Got endpoints: latency-svc-tp7rc [1.376314721s]
Jan 28 14:43:14.531: INFO: Created: latency-svc-kf6x9
Jan 28 14:43:14.553: INFO: Got endpoints: latency-svc-kf6x9 [1.286491562s]
Jan 28 14:43:14.662: INFO: Created: latency-svc-xm4lv
Jan 28 14:43:14.675: INFO: Got endpoints: latency-svc-xm4lv [1.398366371s]
Jan 28 14:43:14.753: INFO: Created: latency-svc-42ldk
Jan 28 14:43:14.828: INFO: Got endpoints: latency-svc-42ldk [1.485544314s]
Jan 28 14:43:14.840: INFO: Created: latency-svc-gmw7v
Jan 28 14:43:14.861: INFO: Got endpoints: latency-svc-gmw7v [1.356758892s]
Jan 28 14:43:14.901: INFO: Created: latency-svc-g6k9d
Jan 28 14:43:14.911: INFO: Got endpoints: latency-svc-g6k9d [1.326756499s]
Jan 28 14:43:14.998: INFO: Created: latency-svc-8rz7q
Jan 28 14:43:15.023: INFO: Got endpoints: latency-svc-8rz7q [1.332982595s]
Jan 28 14:43:15.057: INFO: Created: latency-svc-s2q2v
Jan 28 14:43:15.157: INFO: Created: latency-svc-krbxj
Jan 28 14:43:15.158: INFO: Got endpoints: latency-svc-s2q2v [1.307678278s]
Jan 28 14:43:15.163: INFO: Got endpoints: latency-svc-krbxj [1.14824703s]
Jan 28 14:43:15.189: INFO: Created: latency-svc-qrb9b
Jan 28 14:43:15.195: INFO: Got endpoints: latency-svc-qrb9b [1.045429021s]
Jan 28 14:43:15.229: INFO: Created: latency-svc-v69mw
Jan 28 14:43:15.243: INFO: Got endpoints: latency-svc-v69mw [1.0691013s]
Jan 28 14:43:15.340: INFO: Created: latency-svc-kr56r
Jan 28 14:43:15.340: INFO: Got endpoints: latency-svc-kr56r [1.102348306s]
Jan 28 14:43:15.361: INFO: Created: latency-svc-jmqtj
Jan 28 14:43:15.365: INFO: Got endpoints: latency-svc-jmqtj [1.054212114s]
Jan 28 14:43:15.366: INFO: Latencies: [133.94496ms 140.245627ms 272.346962ms 282.147123ms 414.157749ms 484.303945ms 594.946047ms 652.541778ms 885.218594ms 893.81222ms 964.128304ms 1.045429021s 1.054212114s 1.063977827s 1.0691013s 1.076599646s 1.084550354s 1.102348306s 1.11109948s 1.120762248s 1.12662853s 1.128141815s 1.130172095s 1.130853905s 1.136594566s 1.141957523s 1.148047999s 1.14824703s 1.150658791s 1.150961599s 1.152124903s 1.153741893s 1.167830253s 1.168989581s 1.171675581s 1.184055386s 1.187787915s 1.188336698s 1.197562905s 1.203826536s 1.208269333s 1.20967896s 1.218427377s 1.223558456s 1.226975126s 1.233471877s 1.239061024s 1.240831347s 1.251276124s 1.251843904s 1.257468339s 1.259070244s 1.261394781s 1.264090381s 1.265087052s 1.267358632s 1.268597916s 1.272672172s 1.27406878s 1.277749724s 1.277804433s 1.282104697s 1.285709058s 1.285886309s 1.286210524s 1.286491562s 1.289333711s 1.301835073s 1.307678278s 1.308163183s 1.30990548s 1.312477088s 1.313324267s 1.313941984s 1.322947235s 1.326756499s 1.32680812s 1.327143939s 1.332982595s 1.337675802s 1.3424399s 1.346988703s 1.348953248s 1.352147119s 1.356758892s 1.359010809s 1.360817255s 1.362870685s 1.366525002s 1.375900889s 1.376314721s 1.377476201s 1.381481685s 1.387292596s 1.389513687s 1.394608692s 1.397084294s 1.397580323s 1.398366371s 1.403738712s 1.40839403s 1.409536545s 1.410706479s 1.411538778s 1.413978889s 1.428161197s 1.435690805s 1.43881729s 1.43959647s 1.44491571s 1.44705801s 1.448070615s 1.45094184s 1.457795039s 1.46705725s 1.467871806s 1.469920056s 1.473501226s 1.475877343s 1.484094952s 1.484270746s 1.485544314s 1.492083682s 1.49290452s 1.494969962s 1.494986862s 1.496333617s 1.500163235s 1.503106517s 1.504505555s 1.508754s 1.516597211s 1.518439952s 1.529109803s 1.530914481s 1.532080471s 1.533358669s 1.536252001s 1.547631356s 1.550143606s 1.555151219s 1.55642748s 1.579584259s 1.584070992s 1.587660939s 1.591277078s 1.594551791s 1.596808094s 1.604456917s 1.605562155s 1.606496458s 1.609083537s 1.611133261s 1.613823567s 1.61442641s 1.617485998s 1.619468515s 1.62644198s 1.630552849s 1.632080812s 1.637411327s 1.649509386s 1.658597596s 1.663697511s 1.665699155s 1.673450979s 1.675823877s 1.676906379s 1.677050165s 1.681664969s 1.683287365s 1.683827705s 1.715514141s 1.717789259s 1.731149697s 1.735601762s 1.736910748s 1.758213468s 1.764293098s 1.775942883s 1.789989712s 1.796350949s 1.808996578s 1.828403761s 1.849730786s 2.030202958s 2.080188641s 2.104946269s 2.141466118s 2.150252108s 2.172837458s 2.179257404s 2.226540417s 2.233220978s 2.236531683s 2.274961479s 2.275102451s 2.328029236s 2.415610733s 2.452709788s]
Jan 28 14:43:15.366: INFO: 50 %ile: 1.40839403s
Jan 28 14:43:15.366: INFO: 90 %ile: 1.789989712s
Jan 28 14:43:15.366: INFO: 99 %ile: 2.415610733s
Jan 28 14:43:15.366: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:43:15.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3887" for this suite.
Jan 28 14:44:01.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:44:01.538: INFO: namespace svc-latency-3887 deletion completed in 46.165593732s

• [SLOW TEST:72.987 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:44:01.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 28 14:44:01.629: INFO: PodSpec: initContainers in spec.initContainers
Jan 28 14:45:02.415: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4bc118a4-0e85-4489-8390-6701cd94fa51", GenerateName:"", Namespace:"init-container-204", SelfLink:"/api/v1/namespaces/init-container-204/pods/pod-init-4bc118a4-0e85-4489-8390-6701cd94fa51", UID:"76e75a49-c122-4b5b-9419-d6ac8c62afc6", ResourceVersion:"22202198", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715819441, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"628845944"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-58nbn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000df1a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-58nbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-58nbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-58nbn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0025b2178), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc002748060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025b2210)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025b2230)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0025b2238), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0025b223c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715819441, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715819441, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715819441, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715819441, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001302a00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002598070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0025980e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://be58877cdc9cbadaf3e3ab2caaec78cc4cbf03a3e79f03e93dd5dab48a634b03"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001302de0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001302c00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:45:02.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-204" for this suite.
Jan 28 14:45:24.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:45:24.646: INFO: namespace init-container-204 deletion completed in 22.211993623s

• [SLOW TEST:83.107 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
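The pod driving the test above can be approximated from the logged PodSpec: two init containers (`init1` running `/bin/false`, `init2` running `/bin/true`) ahead of an app container (`run1`), with `restartPolicy: Always`. A hand-written equivalent (a sketch assembled from the log, not the framework's actual manifest; the pod name is a placeholder) would be:

```shell
# With restartPolicy: Always, init1 exits nonzero and is restarted
# indefinitely, so init2 and run1 never start and the pod stays Pending
# with a growing init-container restartCount (3 in the run above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod pod-init-demo \
  -o jsonpath='{.status.initContainerStatuses[0].restartCount}'
```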
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:45:24.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7765dfb5-d83a-4191-a112-fa0ab395d235
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7765dfb5-d83a-4191-a112-fa0ab395d235
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:45:35.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5634" for this suite.
Jan 28 14:45:57.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:45:57.250: INFO: namespace projected-5634 deletion completed in 22.156058458s

• [SLOW TEST:32.602 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:45:57.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 14:45:57.459: INFO: Waiting up to 5m0s for pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199" in namespace "emptydir-7471" to be "success or failure"
Jan 28 14:45:57.484: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 24.777389ms
Jan 28 14:45:59.495: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035911703s
Jan 28 14:46:01.504: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044565081s
Jan 28 14:46:03.516: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057022312s
Jan 28 14:46:05.530: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070806057s
Jan 28 14:46:07.542: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082469379s
STEP: Saw pod success
Jan 28 14:46:07.542: INFO: Pod "pod-aed5f0dd-7091-424d-abfe-39addaf9f199" satisfied condition "success or failure"
Jan 28 14:46:07.552: INFO: Trying to get logs from node iruya-node pod pod-aed5f0dd-7091-424d-abfe-39addaf9f199 container test-container: 
STEP: delete the pod
Jan 28 14:46:07.653: INFO: Waiting for pod pod-aed5f0dd-7091-424d-abfe-39addaf9f199 to disappear
Jan 28 14:46:07.682: INFO: Pod pod-aed5f0dd-7091-424d-abfe-39addaf9f199 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:46:07.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7471" for this suite.
Jan 28 14:46:13.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:46:13.846: INFO: namespace emptydir-7471 deletion completed in 6.155321356s

• [SLOW TEST:16.595 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:46:13.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan 28 14:46:14.007: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:46:27.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-95" for this suite.
Jan 28 14:46:33.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:46:33.494: INFO: namespace init-container-95 deletion completed in 6.236799952s

• [SLOW TEST:19.647 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:46:33.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan 28 14:46:42.172: INFO: Successfully updated pod "labelsupdate03f5cd3f-3e32-46a3-8976-e81e105fab93"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:46:44.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3846" for this suite.
Jan 28 14:47:08.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:47:08.531: INFO: namespace downward-api-3846 deletion completed in 24.267741321s

• [SLOW TEST:35.035 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:47:08.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 14:47:16.738: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:47:16.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7325" for this suite.
Jan 28 14:47:22.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:47:23.017: INFO: namespace container-runtime-7325 deletion completed in 6.164907809s

• [SLOW TEST:14.485 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
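The termination-message check above relies on `terminationMessagePolicy: FallbackToLogsOnError`: when a container exits nonzero without writing `/dev/termination-log`, the kubelet falls back to the tail of its log output as the termination message (`DONE` in this run). A minimal cluster-side reproduction (a sketch; the pod name and command are placeholders, not the test's generated ones):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: docker.io/library/busybox:1.29
    # Writes nothing to /dev/termination-log, logs "DONE", then fails.
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the container has terminated, its message comes from the log tail.
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
```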
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:47:23.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan 28 14:47:23.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-776'
Jan 28 14:47:25.711: INFO: stderr: ""
Jan 28 14:47:25.711: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 14:47:25.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:25.908: INFO: stderr: ""
Jan 28 14:47:25.908: INFO: stdout: "update-demo-nautilus-7qqwp update-demo-nautilus-zm9sd "
Jan 28 14:47:25.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qqwp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:26.189: INFO: stderr: ""
Jan 28 14:47:26.190: INFO: stdout: ""
Jan 28 14:47:26.190: INFO: update-demo-nautilus-7qqwp is created but not running
Jan 28 14:47:31.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:32.228: INFO: stderr: ""
Jan 28 14:47:32.228: INFO: stdout: "update-demo-nautilus-7qqwp update-demo-nautilus-zm9sd "
Jan 28 14:47:32.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qqwp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:33.204: INFO: stderr: ""
Jan 28 14:47:33.204: INFO: stdout: ""
Jan 28 14:47:33.204: INFO: update-demo-nautilus-7qqwp is created but not running
Jan 28 14:47:38.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:38.305: INFO: stderr: ""
Jan 28 14:47:38.305: INFO: stdout: "update-demo-nautilus-7qqwp update-demo-nautilus-zm9sd "
Jan 28 14:47:38.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qqwp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:38.430: INFO: stderr: ""
Jan 28 14:47:38.430: INFO: stdout: "true"
Jan 28 14:47:38.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7qqwp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:38.630: INFO: stderr: ""
Jan 28 14:47:38.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 14:47:38.630: INFO: validating pod update-demo-nautilus-7qqwp
Jan 28 14:47:38.656: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 14:47:38.656: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 14:47:38.656: INFO: update-demo-nautilus-7qqwp is verified up and running
Jan 28 14:47:38.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:38.772: INFO: stderr: ""
Jan 28 14:47:38.772: INFO: stdout: "true"
Jan 28 14:47:38.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:38.908: INFO: stderr: ""
Jan 28 14:47:38.908: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 14:47:38.908: INFO: validating pod update-demo-nautilus-zm9sd
Jan 28 14:47:38.922: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 14:47:38.922: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 14:47:38.922: INFO: update-demo-nautilus-zm9sd is verified up and running
STEP: scaling down the replication controller
Jan 28 14:47:38.927: INFO: scanned /root for discovery docs: 
Jan 28 14:47:38.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-776'
Jan 28 14:47:40.119: INFO: stderr: ""
Jan 28 14:47:40.119: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 14:47:40.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:40.279: INFO: stderr: ""
Jan 28 14:47:40.279: INFO: stdout: "update-demo-nautilus-7qqwp update-demo-nautilus-zm9sd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 14:47:45.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:45.467: INFO: stderr: ""
Jan 28 14:47:45.467: INFO: stdout: "update-demo-nautilus-7qqwp update-demo-nautilus-zm9sd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 14:47:50.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:50.688: INFO: stderr: ""
Jan 28 14:47:50.688: INFO: stdout: "update-demo-nautilus-zm9sd "
Jan 28 14:47:50.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:50.809: INFO: stderr: ""
Jan 28 14:47:50.809: INFO: stdout: "true"
Jan 28 14:47:50.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:50.964: INFO: stderr: ""
Jan 28 14:47:50.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 14:47:50.964: INFO: validating pod update-demo-nautilus-zm9sd
Jan 28 14:47:50.975: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 14:47:50.976: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 14:47:50.976: INFO: update-demo-nautilus-zm9sd is verified up and running
STEP: scaling up the replication controller
Jan 28 14:47:50.980: INFO: scanned /root for discovery docs: 
Jan 28 14:47:50.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-776'
Jan 28 14:47:52.241: INFO: stderr: ""
Jan 28 14:47:52.241: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 14:47:52.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:52.380: INFO: stderr: ""
Jan 28 14:47:52.381: INFO: stdout: "update-demo-nautilus-sjshj update-demo-nautilus-zm9sd "
Jan 28 14:47:52.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjshj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:52.783: INFO: stderr: ""
Jan 28 14:47:52.783: INFO: stdout: ""
Jan 28 14:47:52.783: INFO: update-demo-nautilus-sjshj is created but not running
Jan 28 14:47:57.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:47:57.945: INFO: stderr: ""
Jan 28 14:47:57.945: INFO: stdout: "update-demo-nautilus-sjshj update-demo-nautilus-zm9sd "
Jan 28 14:47:57.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjshj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:47:58.135: INFO: stderr: ""
Jan 28 14:47:58.135: INFO: stdout: ""
Jan 28 14:47:58.135: INFO: update-demo-nautilus-sjshj is created but not running
Jan 28 14:48:03.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-776'
Jan 28 14:48:03.334: INFO: stderr: ""
Jan 28 14:48:03.334: INFO: stdout: "update-demo-nautilus-sjshj update-demo-nautilus-zm9sd "
Jan 28 14:48:03.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjshj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:48:03.485: INFO: stderr: ""
Jan 28 14:48:03.485: INFO: stdout: "true"
Jan 28 14:48:03.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjshj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:48:03.604: INFO: stderr: ""
Jan 28 14:48:03.604: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 14:48:03.604: INFO: validating pod update-demo-nautilus-sjshj
Jan 28 14:48:03.624: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 14:48:03.624: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 14:48:03.624: INFO: update-demo-nautilus-sjshj is verified up and running
Jan 28 14:48:03.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:48:03.710: INFO: stderr: ""
Jan 28 14:48:03.710: INFO: stdout: "true"
Jan 28 14:48:03.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zm9sd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-776'
Jan 28 14:48:03.828: INFO: stderr: ""
Jan 28 14:48:03.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 14:48:03.828: INFO: validating pod update-demo-nautilus-zm9sd
Jan 28 14:48:03.852: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 14:48:03.852: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 14:48:03.853: INFO: update-demo-nautilus-zm9sd is verified up and running
STEP: using delete to clean up resources
Jan 28 14:48:03.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-776'
Jan 28 14:48:04.092: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 14:48:04.092: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 28 14:48:04.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-776'
Jan 28 14:48:04.209: INFO: stderr: "No resources found.\n"
Jan 28 14:48:04.209: INFO: stdout: ""
Jan 28 14:48:04.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-776 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 14:48:04.591: INFO: stderr: ""
Jan 28 14:48:04.591: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:48:04.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-776" for this suite.
Jan 28 14:48:20.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:48:20.764: INFO: namespace kubectl-776 deletion completed in 16.159016396s

• [SLOW TEST:57.747 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
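The scale test above creates its replication controller from a manifest piped to `kubectl create -f -`, but the file itself never appears in the log. A minimal sketch of what that manifest likely contains, reconstructed from the RC name, the `name=update-demo` label, the container name, and the image shown in the output (everything else is an assumption):

```yaml
# RC name, selector/label, container name, and image are taken from the log;
# replica count and port are assumptions.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 1              # the test later scales this with `kubectl scale --replicas=2`
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80    # assumed; not visible in the log
```

The repeated `kubectl get pods -o template` calls in the log then poll this RC's pods by the `name=update-demo` label until every container reports a running state and the expected image.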
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:48:20.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:49:20.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3948" for this suite.
Jan 28 14:49:42.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:49:43.011: INFO: namespace container-probe-3948 deletion completed in 22.116254299s

• [SLOW TEST:82.244 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
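The readiness-probe test above observes the pod for a fixed window (roughly a minute in this run) and asserts it never becomes Ready and never restarts. A sketch of the kind of pod spec that exercises this; the log shows only the test name and namespace, so the image and probe command here are assumptions:

```yaml
# Hypothetical reconstruction: image, command, and probe timings are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-test     # assumed name; not shown in the log
spec:
  containers:
  - name: readiness
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]  # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
    # No livenessProbe: readiness failures alone must never trigger a
    # container restart, which is exactly what the test asserts.
```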
SS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:49:43.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e
Jan 28 14:49:43.095: INFO: Pod name my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e: Found 0 pods out of 1
Jan 28 14:49:48.132: INFO: Pod name my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e: Found 1 pods out of 1
Jan 28 14:49:48.133: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e" are running
Jan 28 14:49:52.151: INFO: Pod "my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e-wgwwm" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 14:49:43 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 14:49:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 14:49:43 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 14:49:43 +0000 UTC Reason: Message:}])
Jan 28 14:49:52.151: INFO: Trying to dial the pod
Jan 28 14:49:57.190: INFO: Controller my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e: Got expected result from replica 1 [my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e-wgwwm]: "my-hostname-basic-680e5c3e-b43d-4609-817d-0aa6836ada0e-wgwwm", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:49:57.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5831" for this suite.
Jan 28 14:50:03.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:50:03.370: INFO: namespace replication-controller-5831 deletion completed in 6.169584569s

• [SLOW TEST:20.359 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:50:03.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3717
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan 28 14:50:03.674: INFO: Found 0 stateful pods, waiting for 3
Jan 28 14:50:13.692: INFO: Found 2 stateful pods, waiting for 3
Jan 28 14:50:23.693: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:50:23.693: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:50:23.693: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 14:50:33.686: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:50:33.687: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:50:33.687: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 28 14:50:33.737: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 28 14:50:43.845: INFO: Updating stateful set ss2
Jan 28 14:50:43.972: INFO: Waiting for Pod statefulset-3717/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 28 14:50:54.393: INFO: Found 2 stateful pods, waiting for 3
Jan 28 14:51:04.406: INFO: Found 2 stateful pods, waiting for 3
Jan 28 14:51:14.424: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:51:14.425: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:51:14.425: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Jan 28 14:51:24.403: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:51:24.404: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 14:51:24.404: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 28 14:51:24.439: INFO: Updating stateful set ss2
Jan 28 14:51:24.486: INFO: Waiting for Pod statefulset-3717/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:51:35.280: INFO: Updating stateful set ss2
Jan 28 14:51:35.356: INFO: Waiting for StatefulSet statefulset-3717/ss2 to complete update
Jan 28 14:51:35.357: INFO: Waiting for Pod statefulset-3717/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:51:45.377: INFO: Waiting for StatefulSet statefulset-3717/ss2 to complete update
Jan 28 14:51:45.377: INFO: Waiting for Pod statefulset-3717/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 28 14:51:55.383: INFO: Waiting for StatefulSet statefulset-3717/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 28 14:52:05.374: INFO: Deleting all statefulset in ns statefulset-3717
Jan 28 14:52:05.379: INFO: Scaling statefulset ss2 to 0
Jan 28 14:52:35.409: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 14:52:35.422: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:52:35.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3717" for this suite.
Jan 28 14:52:43.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:52:43.704: INFO: namespace statefulset-3717 deletion completed in 8.248504461s

• [SLOW TEST:160.333 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
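The canary and phased rolling updates in the StatefulSet test above are driven by the `RollingUpdate` strategy's `partition` field: only pods with an ordinal at or above the partition receive the new revision. A sketch of the mechanism, using the names, replica count, and images from the log (the labels and partition values illustrate the test's steps and are assumptions):

```yaml
# ss2, 3 replicas, the "test" service, and the nginx 1.14->1.15-alpine update
# come from the log; selector labels and the partition value are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test
  selector:
    matchLabels:
      app: ss2                   # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2               # canary: only ordinals >= 2 (i.e. ss2-2) update
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated from 1.14-alpine
```

Setting `partition` above the replica count applies no update at all; lowering it to 2 updates only ss2-2 (the canary), and lowering it further phases the rollout through ss2-1 and then ss2-0, which matches the sequence of "Waiting for Pod ... to have revision" lines in the log.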
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:52:43.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-c9480917-1a3d-48bf-bdc8-1751839e02bc in namespace container-probe-5822
Jan 28 14:52:51.947: INFO: Started pod busybox-c9480917-1a3d-48bf-bdc8-1751839e02bc in namespace container-probe-5822
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 14:52:51.955: INFO: Initial restart count of pod busybox-c9480917-1a3d-48bf-bdc8-1751839e02bc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:56:52.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5822" for this suite.
Jan 28 14:56:58.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:56:58.712: INFO: namespace container-probe-5822 deletion completed in 6.206966418s

• [SLOW TEST:255.007 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
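The liveness test above takes its probe command directly from the test name: `cat /tmp/health`. A sketch of a pod spec that matches, assuming the common pattern where the container creates the file at startup and never deletes it (image and timings are assumptions; the log shows only a `busybox-<uuid>` pod name):

```yaml
# Probe command from the test name; image, args, and timings are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 15
      periodSeconds: 5
```

Because `/tmp/health` is never removed, every probe succeeds and `restartCount` stays at its initial value of 0 for the test's roughly four-minute observation window, as the log records.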
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:56:58.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:56:59.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193" in namespace "projected-8504" to be "success or failure"
Jan 28 14:56:59.075: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193": Phase="Pending", Reason="", readiness=false. Elapsed: 27.071948ms
Jan 28 14:57:01.261: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213473416s
Jan 28 14:57:03.269: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22181908s
Jan 28 14:57:05.278: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230926638s
Jan 28 14:57:07.287: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.239773976s
STEP: Saw pod success
Jan 28 14:57:07.287: INFO: Pod "downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193" satisfied condition "success or failure"
Jan 28 14:57:07.291: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193 container client-container: 
STEP: delete the pod
Jan 28 14:57:07.356: INFO: Waiting for pod downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193 to disappear
Jan 28 14:57:07.365: INFO: Pod downwardapi-volume-a42d669a-28b4-4353-bacd-7ae1105a8193 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:57:07.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8504" for this suite.
Jan 28 14:57:13.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:57:13.616: INFO: namespace projected-8504 deletion completed in 6.243269819s

• [SLOW TEST:14.903 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
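The projected downward API test above asserts that files in the volume are created with the volume's `defaultMode`. A sketch of the shape of such a pod; only the volume type, the `client-container` name, and the success-or-failure wait are visible in the log, so the mode value, paths, and image are assumptions:

```yaml
# Hypothetical reconstruction: mode, paths, image, and command are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never           # the test waits for "success or failure"
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # the test verifies files receive this mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```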
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:57:13.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:57:13.762: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e" in namespace "downward-api-8344" to be "success or failure"
Jan 28 14:57:13.786: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.2606ms
Jan 28 14:57:15.805: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042522991s
Jan 28 14:57:17.828: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066215471s
Jan 28 14:57:19.836: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073585883s
Jan 28 14:57:21.849: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087352097s
STEP: Saw pod success
Jan 28 14:57:21.850: INFO: Pod "downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e" satisfied condition "success or failure"
Jan 28 14:57:21.858: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e container client-container: 
STEP: delete the pod
Jan 28 14:57:21.955: INFO: Waiting for pod downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e to disappear
Jan 28 14:57:21.963: INFO: Pod downwardapi-volume-1009f06e-a040-43cb-aa0d-41b13e3b964e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:57:21.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8344" for this suite.
Jan 28 14:57:28.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:57:28.133: INFO: namespace downward-api-8344 deletion completed in 6.158154016s

• [SLOW TEST:14.516 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
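The downward API test above exercises a specific fallback: when a container sets no memory limit, a `resourceFieldRef` for `limits.memory` resolves to the node's allocatable memory instead. A sketch of the mechanism (names, image, and paths are assumptions; the behavior under test comes from the test name):

```yaml
# Sketch of the fallback under test: no resources.limits.memory is set,
# so limits.memory below resolves to node allocatable memory.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    # Deliberately no resources.limits.memory here.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory    # falls back to node allocatable memory
```

The companion "cpu limit" test later in this log checks the analogous behavior for `limits.cpu` when a CPU limit *is* set.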
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:57:28.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 14:57:28.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950" in namespace "projected-1292" to be "success or failure"
Jan 28 14:57:28.291: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950": Phase="Pending", Reason="", readiness=false. Elapsed: 35.498809ms
Jan 28 14:57:30.300: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044307817s
Jan 28 14:57:32.793: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950": Phase="Pending", Reason="", readiness=false. Elapsed: 4.536779093s
Jan 28 14:57:34.806: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550062834s
Jan 28 14:57:36.827: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.570861685s
STEP: Saw pod success
Jan 28 14:57:36.827: INFO: Pod "downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950" satisfied condition "success or failure"
Jan 28 14:57:36.837: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950 container client-container: 
STEP: delete the pod
Jan 28 14:57:36.995: INFO: Waiting for pod downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950 to disappear
Jan 28 14:57:37.002: INFO: Pod downwardapi-volume-c498a60c-9047-4abf-aefd-d14099c34950 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:57:37.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1292" for this suite.
Jan 28 14:57:43.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:57:43.149: INFO: namespace projected-1292 deletion completed in 6.139418518s

• [SLOW TEST:15.015 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:57:43.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan 28 14:57:43.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9142 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 28 14:57:53.745: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0128 14:57:52.093062    3152 log.go:172] (0xc00063e210) (0xc000770140) Create stream\nI0128 14:57:52.093195    3152 log.go:172] (0xc00063e210) (0xc000770140) Stream added, broadcasting: 1\nI0128 14:57:52.099496    3152 log.go:172] (0xc00063e210) Reply frame received for 1\nI0128 14:57:52.099566    3152 log.go:172] (0xc00063e210) (0xc0003ee5a0) Create stream\nI0128 14:57:52.099579    3152 log.go:172] (0xc00063e210) (0xc0003ee5a0) Stream added, broadcasting: 3\nI0128 14:57:52.104702    3152 log.go:172] (0xc00063e210) Reply frame received for 3\nI0128 14:57:52.104790    3152 log.go:172] (0xc00063e210) (0xc0007701e0) Create stream\nI0128 14:57:52.104813    3152 log.go:172] (0xc00063e210) (0xc0007701e0) Stream added, broadcasting: 5\nI0128 14:57:52.107409    3152 log.go:172] (0xc00063e210) Reply frame received for 5\nI0128 14:57:52.107580    3152 log.go:172] (0xc00063e210) (0xc0008a0000) Create stream\nI0128 14:57:52.107603    3152 log.go:172] (0xc00063e210) (0xc0008a0000) Stream added, broadcasting: 7\nI0128 14:57:52.112939    3152 log.go:172] (0xc00063e210) Reply frame received for 7\nI0128 14:57:52.113543    3152 log.go:172] (0xc0003ee5a0) (3) Writing data frame\nI0128 14:57:52.113993    3152 log.go:172] (0xc0003ee5a0) (3) Writing data frame\nI0128 14:57:52.126086    3152 log.go:172] (0xc00063e210) Data frame received for 5\nI0128 14:57:52.126113    3152 log.go:172] (0xc0007701e0) (5) Data frame handling\nI0128 14:57:52.126150    3152 log.go:172] (0xc0007701e0) (5) Data frame sent\nI0128 14:57:52.128532    3152 log.go:172] (0xc00063e210) Data frame received for 5\nI0128 14:57:52.128551    3152 log.go:172] (0xc0007701e0) (5) Data frame handling\nI0128 14:57:52.128575    3152 log.go:172] (0xc0007701e0) (5) Data frame 
sent\nI0128 14:57:53.666193    3152 log.go:172] (0xc00063e210) (0xc0003ee5a0) Stream removed, broadcasting: 3\nI0128 14:57:53.666501    3152 log.go:172] (0xc00063e210) Data frame received for 1\nI0128 14:57:53.666572    3152 log.go:172] (0xc000770140) (1) Data frame handling\nI0128 14:57:53.666608    3152 log.go:172] (0xc000770140) (1) Data frame sent\nI0128 14:57:53.666842    3152 log.go:172] (0xc00063e210) (0xc000770140) Stream removed, broadcasting: 1\nI0128 14:57:53.667269    3152 log.go:172] (0xc00063e210) (0xc0007701e0) Stream removed, broadcasting: 5\nI0128 14:57:53.667523    3152 log.go:172] (0xc00063e210) (0xc0008a0000) Stream removed, broadcasting: 7\nI0128 14:57:53.667728    3152 log.go:172] (0xc00063e210) Go away received\nI0128 14:57:53.667836    3152 log.go:172] (0xc00063e210) (0xc000770140) Stream removed, broadcasting: 1\nI0128 14:57:53.667862    3152 log.go:172] (0xc00063e210) (0xc0003ee5a0) Stream removed, broadcasting: 3\nI0128 14:57:53.667876    3152 log.go:172] (0xc00063e210) (0xc0007701e0) Stream removed, broadcasting: 5\nI0128 14:57:53.667886    3152 log.go:172] (0xc00063e210) (0xc0008a0000) Stream removed, broadcasting: 7\n"
Jan 28 14:57:53.746: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:57:55.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9142" for this suite.
Jan 28 14:58:02.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:58:02.199: INFO: namespace kubectl-9142 deletion completed in 6.358661292s

• [SLOW TEST:19.050 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:58:02.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 14:58:16.448: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-60a8c41d-bc4f-4c6f-8332-533cf45743df contains '' instead of 'foo.example.com.'
Jan 28 14:58:16.448: INFO: Lookups using dns-6244/dns-test-60a8c41d-bc4f-4c6f-8332-533cf45743df failed for: [jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local]

Jan 28 14:58:21.470: INFO: DNS probes using dns-test-60a8c41d-bc4f-4c6f-8332-533cf45743df succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 14:58:35.704: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains '' instead of 'bar.example.com.'
Jan 28 14:58:35.711: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains '' instead of 'bar.example.com.'
Jan 28 14:58:35.712: INFO: Lookups using dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local]

Jan 28 14:58:40.741: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 14:58:40.752: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 14:58:40.752: INFO: Lookups using dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local]

Jan 28 14:58:45.724: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 14:58:45.733: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 14:58:45.733: INFO: Lookups using dns-6244/dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local]

Jan 28 14:58:50.731: INFO: DNS probes using dns-test-3313dbb3-cdeb-484c-9005-ef44de38d668 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 14:59:07.116: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-93186e21-acef-4b28-b248-bcabc3f81380 contains '' instead of '10.101.53.174'
Jan 28 14:59:07.125: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod  dns-6244/dns-test-93186e21-acef-4b28-b248-bcabc3f81380 contains '' instead of '10.101.53.174'
Jan 28 14:59:07.125: INFO: Lookups using dns-6244/dns-test-93186e21-acef-4b28-b248-bcabc3f81380 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local]

Jan 28 14:59:12.153: INFO: DNS probes using dns-test-93186e21-acef-4b28-b248-bcabc3f81380 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:59:12.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6244" for this suite.
Jan 28 14:59:18.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:59:18.626: INFO: namespace dns-6244 deletion completed in 6.251657537s

• [SLOW TEST:76.425 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:59:18.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 14:59:18.836: INFO: Number of nodes with available pods: 0
Jan 28 14:59:18.836: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:19.870: INFO: Number of nodes with available pods: 0
Jan 28 14:59:19.870: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:20.922: INFO: Number of nodes with available pods: 0
Jan 28 14:59:20.922: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:21.851: INFO: Number of nodes with available pods: 0
Jan 28 14:59:21.851: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:22.856: INFO: Number of nodes with available pods: 0
Jan 28 14:59:22.856: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:23.896: INFO: Number of nodes with available pods: 0
Jan 28 14:59:23.896: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:25.874: INFO: Number of nodes with available pods: 0
Jan 28 14:59:25.874: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:27.377: INFO: Number of nodes with available pods: 0
Jan 28 14:59:27.377: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:27.876: INFO: Number of nodes with available pods: 0
Jan 28 14:59:27.876: INFO: Node iruya-node is running more than one daemon pod
Jan 28 14:59:28.857: INFO: Number of nodes with available pods: 1
Jan 28 14:59:28.857: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:29.862: INFO: Number of nodes with available pods: 2
Jan 28 14:59:29.862: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 28 14:59:29.964: INFO: Number of nodes with available pods: 1
Jan 28 14:59:29.964: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:31.278: INFO: Number of nodes with available pods: 1
Jan 28 14:59:31.278: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:32.092: INFO: Number of nodes with available pods: 1
Jan 28 14:59:32.093: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:32.983: INFO: Number of nodes with available pods: 1
Jan 28 14:59:32.983: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:33.986: INFO: Number of nodes with available pods: 1
Jan 28 14:59:33.986: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:35.460: INFO: Number of nodes with available pods: 1
Jan 28 14:59:35.460: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:36.195: INFO: Number of nodes with available pods: 1
Jan 28 14:59:36.195: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:37.093: INFO: Number of nodes with available pods: 1
Jan 28 14:59:37.093: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 28 14:59:37.989: INFO: Number of nodes with available pods: 2
Jan 28 14:59:37.989: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2004, will wait for the garbage collector to delete the pods
Jan 28 14:59:38.060: INFO: Deleting DaemonSet.extensions daemon-set took: 10.218642ms
Jan 28 14:59:38.361: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.874442ms
Jan 28 14:59:47.936: INFO: Number of nodes with available pods: 0
Jan 28 14:59:47.936: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 14:59:47.943: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2004/daemonsets","resourceVersion":"22204217"},"items":null}

Jan 28 14:59:47.946: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2004/pods","resourceVersion":"22204217"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 14:59:47.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2004" for this suite.
Jan 28 14:59:54.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 14:59:54.080: INFO: namespace daemonsets-2004 deletion completed in 6.10568783s

• [SLOW TEST:35.454 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 14:59:54.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3b0d36f2-e751-43b8-b33e-23af049c39c3
STEP: Creating a pod to test consume secrets
Jan 28 14:59:54.147: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec" in namespace "projected-5016" to be "success or failure"
Jan 28 14:59:54.162: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Pending", Reason="", readiness=false. Elapsed: 15.140262ms
Jan 28 14:59:56.172: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025186451s
Jan 28 14:59:58.182: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035337395s
Jan 28 15:00:00.191: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044273341s
Jan 28 15:00:02.200: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052889574s
Jan 28 15:00:04.211: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063864256s
STEP: Saw pod success
Jan 28 15:00:04.211: INFO: Pod "pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec" satisfied condition "success or failure"
Jan 28 15:00:04.216: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 15:00:04.266: INFO: Waiting for pod pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec to disappear
Jan 28 15:00:04.297: INFO: Pod pod-projected-secrets-3b8d62d6-e14c-4b2e-93b9-f5425b9202ec no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:00:04.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5016" for this suite.
Jan 28 15:00:10.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:00:10.537: INFO: namespace projected-5016 deletion completed in 6.231787491s

• [SLOW TEST:16.458 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 15:00:10.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan 28 15:00:10.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9130'
Jan 28 15:00:11.146: INFO: stderr: ""
Jan 28 15:00:11.146: INFO: stdout: "pod/pause created\n"
Jan 28 15:00:11.146: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 28 15:00:11.147: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9130" to be "running and ready"
Jan 28 15:00:11.152: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.535622ms
Jan 28 15:00:13.159: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012611966s
Jan 28 15:00:15.172: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025501599s
Jan 28 15:00:17.179: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032744644s
Jan 28 15:00:19.190: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.043071402s
Jan 28 15:00:19.190: INFO: Pod "pause" satisfied condition "running and ready"
Jan 28 15:00:19.190: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 28 15:00:19.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9130'
Jan 28 15:00:19.412: INFO: stderr: ""
Jan 28 15:00:19.412: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 28 15:00:19.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9130'
Jan 28 15:00:19.525: INFO: stderr: ""
Jan 28 15:00:19.525: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 28 15:00:19.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9130'
Jan 28 15:00:19.680: INFO: stderr: ""
Jan 28 15:00:19.680: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 28 15:00:19.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9130'
Jan 28 15:00:19.759: INFO: stderr: ""
Jan 28 15:00:19.759: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan 28 15:00:19.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9130'
Jan 28 15:00:19.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 15:00:19.907: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 28 15:00:19.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9130'
Jan 28 15:00:20.178: INFO: stderr: "No resources found.\n"
Jan 28 15:00:20.178: INFO: stdout: ""
Jan 28 15:00:20.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9130 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 15:00:20.293: INFO: stderr: ""
Jan 28 15:00:20.293: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:00:20.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9130" for this suite.
Jan 28 15:00:26.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:00:26.506: INFO: namespace kubectl-9130 deletion completed in 6.206760422s

• [SLOW TEST:15.967 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 15:00:26.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-pl96
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 15:00:26.644: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pl96" in namespace "subpath-5921" to be "success or failure"
Jan 28 15:00:26.713: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Pending", Reason="", readiness=false. Elapsed: 68.331179ms
Jan 28 15:00:28.725: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080815402s
Jan 28 15:00:30.736: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091241987s
Jan 28 15:00:32.757: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11278645s
Jan 28 15:00:34.768: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 8.123598926s
Jan 28 15:00:36.782: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 10.137194249s
Jan 28 15:00:38.790: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 12.145953648s
Jan 28 15:00:40.799: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 14.154281454s
Jan 28 15:00:42.808: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 16.163440062s
Jan 28 15:00:44.824: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 18.179414877s
Jan 28 15:00:46.831: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 20.186923721s
Jan 28 15:00:48.841: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 22.196636614s
Jan 28 15:00:50.857: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 24.212537764s
Jan 28 15:00:52.870: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 26.225201947s
Jan 28 15:00:54.877: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Running", Reason="", readiness=true. Elapsed: 28.232970143s
Jan 28 15:00:56.888: INFO: Pod "pod-subpath-test-configmap-pl96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.243518346s
STEP: Saw pod success
Jan 28 15:00:56.888: INFO: Pod "pod-subpath-test-configmap-pl96" satisfied condition "success or failure"
Jan 28 15:00:56.899: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-pl96 container test-container-subpath-configmap-pl96: 
STEP: delete the pod
Jan 28 15:00:57.018: INFO: Waiting for pod pod-subpath-test-configmap-pl96 to disappear
Jan 28 15:00:57.068: INFO: Pod pod-subpath-test-configmap-pl96 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pl96
Jan 28 15:00:57.069: INFO: Deleting pod "pod-subpath-test-configmap-pl96" in namespace "subpath-5921"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:00:57.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5921" for this suite.
Jan 28 15:01:03.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:01:03.295: INFO: namespace subpath-5921 deletion completed in 6.201181478s

• [SLOW TEST:36.788 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 15:01:03.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-41a22765-c727-4a05-9ff5-6825627b6f0c
STEP: Creating a pod to test consume configMaps
Jan 28 15:01:03.424: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb" in namespace "projected-9115" to be "success or failure"
Jan 28 15:01:03.459: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.562359ms
Jan 28 15:01:05.466: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042084457s
Jan 28 15:01:07.480: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056112843s
Jan 28 15:01:09.493: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068606845s
Jan 28 15:01:11.502: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077645263s
Jan 28 15:01:13.515: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.091439876s
STEP: Saw pod success
Jan 28 15:01:13.516: INFO: Pod "pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb" satisfied condition "success or failure"
Jan 28 15:01:13.523: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 15:01:13.609: INFO: Waiting for pod pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb to disappear
Jan 28 15:01:13.626: INFO: Pod pod-projected-configmaps-cde8751a-c405-42ef-875d-063f3a37ebbb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:01:13.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9115" for this suite.
Jan 28 15:01:19.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:01:19.953: INFO: namespace projected-9115 deletion completed in 6.316907186s

• [SLOW TEST:16.657 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 15:01:19.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0128 15:01:30.189513       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 15:01:30.189: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:01:30.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4043" for this suite.
Jan 28 15:01:36.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:01:36.421: INFO: namespace gc-4043 deletion completed in 6.224417094s

• [SLOW TEST:16.466 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 28 15:01:36.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan 28 15:01:36.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6" in namespace "downward-api-6604" to be "success or failure"
Jan 28 15:01:36.628: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.942885ms
Jan 28 15:01:38.642: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030267579s
Jan 28 15:01:40.654: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042671395s
Jan 28 15:01:42.676: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06462771s
Jan 28 15:01:44.684: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072720194s
STEP: Saw pod success
Jan 28 15:01:44.684: INFO: Pod "downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6" satisfied condition "success or failure"
Jan 28 15:01:44.688: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6 container client-container: 
STEP: delete the pod
Jan 28 15:01:44.731: INFO: Waiting for pod downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6 to disappear
Jan 28 15:01:44.754: INFO: Pod downwardapi-volume-6e6c727a-275d-48ce-8614-a6a6f1f87cf6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 28 15:01:44.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6604" for this suite.
Jan 28 15:01:50.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 28 15:01:51.045: INFO: namespace downward-api-6604 deletion completed in 6.28542547s

• [SLOW TEST:14.623 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 28 15:01:51.046: INFO: Running AfterSuite actions on all nodes
Jan 28 15:01:51.046: INFO: Running AfterSuite actions on node 1
Jan 28 15:01:51.046: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 7539.320 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS