I0202 12:56:18.991436 8 e2e.go:243] Starting e2e run "76bbf8e8-e6fd-40b9-810b-62960b987b33" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580648177 - Will randomize all specs
Will run 215 of 4412 specs

Feb 2 12:56:19.261: INFO: >>> kubeConfig: /root/.kube/config
Feb 2 12:56:19.264: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 2 12:56:19.283: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 2 12:56:19.311: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 2 12:56:19.311: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 2 12:56:19.311: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 2 12:56:19.344: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 2 12:56:19.344: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 2 12:56:19.344: INFO: e2e test version: v1.15.7
Feb 2 12:56:19.345: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:56:19.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Feb 2 12:56:19.458: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8929.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 2 12:56:35.518: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.529: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.535: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.542: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.548: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.552: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.556: INFO: Unable to read jessie_udp@PodARecord from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.560: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8: the server could not find the requested resource (get pods dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8)
Feb 2 12:56:35.561: INFO: Lookups using dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 2 12:56:40.617: INFO: DNS probes using dns-8929/dns-test-59726751-cedc-4b9f-ba2a-66b1c762a2e8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:56:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8929" for this suite.
Feb 2 12:56:47.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:56:47.177: INFO: namespace dns-8929 deletion completed in 6.386467483s

• [SLOW TEST:27.831 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:56:47.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 2 12:56:47.297: INFO: Waiting up to 5m0s for pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af" in namespace "var-expansion-240" to be "success or failure"
Feb 2 12:56:47.302: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474004ms
Feb 2 12:56:49.314: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016526865s
Feb 2 12:56:51.321: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023721183s
Feb 2 12:56:53.328: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031006099s
Feb 2 12:56:55.340: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04287182s
STEP: Saw pod success
Feb 2 12:56:55.341: INFO: Pod "var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af" satisfied condition "success or failure"
Feb 2 12:56:55.346: INFO: Trying to get logs from node iruya-node pod var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af container dapi-container: 
STEP: delete the pod
Feb 2 12:56:55.420: INFO: Waiting for pod var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af to disappear
Feb 2 12:56:55.452: INFO: Pod var-expansion-3d50d844-734f-4f24-8392-779dcfa6c8af no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:56:55.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-240" for this suite.
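For context on the test above: the dapi-container's environment defines one variable in terms of another using $(VAR) references, which the kubelet expands before the container starts. A minimal sketch of such an env block using the k8s.io/api types (FOO and BAR are illustrative names, not the test's actual spec):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// The kubelet resolves $(FOO) in later entries before starting the
    	// container, so inside the container BAR reads "foo-value;;".
    	env := []corev1.EnvVar{
    		{Name: "FOO", Value: "foo-value"},
    		{Name: "BAR", Value: "$(FOO);;"},
    	}
    	fmt.Printf("%+v\n", env)
    }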
Feb 2 12:57:03.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:57:03.712: INFO: namespace var-expansion-240 deletion completed in 8.253857114s

• [SLOW TEST:16.535 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:57:03.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 2 12:57:03.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2998'
Feb 2 12:57:06.623: INFO: stderr: ""
Feb 2 12:57:06.623: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 12:57:06.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:06.897: INFO: stderr: ""
Feb 2 12:57:06.898: INFO: stdout: "update-demo-nautilus-644lm update-demo-nautilus-wzwtr "
Feb 2 12:57:06.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-644lm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:07.055: INFO: stderr: ""
Feb 2 12:57:07.055: INFO: stdout: ""
Feb 2 12:57:07.055: INFO: update-demo-nautilus-644lm is created but not running
Feb 2 12:57:12.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:12.324: INFO: stderr: ""
Feb 2 12:57:12.324: INFO: stdout: "update-demo-nautilus-644lm update-demo-nautilus-wzwtr "
Feb 2 12:57:12.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-644lm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:13.113: INFO: stderr: ""
Feb 2 12:57:13.113: INFO: stdout: ""
Feb 2 12:57:13.113: INFO: update-demo-nautilus-644lm is created but not running
Feb 2 12:57:18.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:18.297: INFO: stderr: ""
Feb 2 12:57:18.297: INFO: stdout: "update-demo-nautilus-644lm update-demo-nautilus-wzwtr "
Feb 2 12:57:18.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-644lm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:18.418: INFO: stderr: ""
Feb 2 12:57:18.418: INFO: stdout: "true"
Feb 2 12:57:18.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-644lm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:18.535: INFO: stderr: ""
Feb 2 12:57:18.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 2 12:57:18.535: INFO: validating pod update-demo-nautilus-644lm
Feb 2 12:57:18.549: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 12:57:18.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 12:57:18.549: INFO: update-demo-nautilus-644lm is verified up and running
Feb 2 12:57:18.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:18.662: INFO: stderr: ""
Feb 2 12:57:18.662: INFO: stdout: "true"
Feb 2 12:57:18.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:18.736: INFO: stderr: ""
Feb 2 12:57:18.736: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 2 12:57:18.736: INFO: validating pod update-demo-nautilus-wzwtr
Feb 2 12:57:18.745: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 12:57:18.745: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 12:57:18.745: INFO: update-demo-nautilus-wzwtr is verified up and running
STEP: scaling down the replication controller
Feb 2 12:57:18.768: INFO: scanned /root for discovery docs: 
Feb 2 12:57:18.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2998'
Feb 2 12:57:19.919: INFO: stderr: ""
Feb 2 12:57:19.919: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 12:57:19.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:20.248: INFO: stderr: ""
Feb 2 12:57:20.248: INFO: stdout: "update-demo-nautilus-644lm update-demo-nautilus-wzwtr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 2 12:57:25.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:25.393: INFO: stderr: ""
Feb 2 12:57:25.393: INFO: stdout: "update-demo-nautilus-644lm update-demo-nautilus-wzwtr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 2 12:57:30.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:30.632: INFO: stderr: ""
Feb 2 12:57:30.633: INFO: stdout: "update-demo-nautilus-wzwtr "
Feb 2 12:57:30.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:30.752: INFO: stderr: ""
Feb 2 12:57:30.752: INFO: stdout: "true"
Feb 2 12:57:30.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:30.923: INFO: stderr: ""
Feb 2 12:57:30.923: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 2 12:57:30.923: INFO: validating pod update-demo-nautilus-wzwtr
Feb 2 12:57:30.961: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 12:57:30.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 12:57:30.962: INFO: update-demo-nautilus-wzwtr is verified up and running
STEP: scaling up the replication controller
Feb 2 12:57:30.970: INFO: scanned /root for discovery docs: 
Feb 2 12:57:30.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2998'
Feb 2 12:57:32.302: INFO: stderr: ""
Feb 2 12:57:32.302: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 2 12:57:32.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:32.789: INFO: stderr: ""
Feb 2 12:57:32.789: INFO: stdout: "update-demo-nautilus-wgsvw update-demo-nautilus-wzwtr "
Feb 2 12:57:32.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgsvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:33.702: INFO: stderr: ""
Feb 2 12:57:33.702: INFO: stdout: ""
Feb 2 12:57:33.702: INFO: update-demo-nautilus-wgsvw is created but not running
Feb 2 12:57:38.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2998'
Feb 2 12:57:38.831: INFO: stderr: ""
Feb 2 12:57:38.831: INFO: stdout: "update-demo-nautilus-wgsvw update-demo-nautilus-wzwtr "
Feb 2 12:57:38.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgsvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:38.934: INFO: stderr: ""
Feb 2 12:57:38.934: INFO: stdout: "true"
Feb 2 12:57:38.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgsvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:39.090: INFO: stderr: ""
Feb 2 12:57:39.090: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 2 12:57:39.090: INFO: validating pod update-demo-nautilus-wgsvw
Feb 2 12:57:39.097: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 12:57:39.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 12:57:39.097: INFO: update-demo-nautilus-wgsvw is verified up and running
Feb 2 12:57:39.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:39.176: INFO: stderr: ""
Feb 2 12:57:39.176: INFO: stdout: "true"
Feb 2 12:57:39.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzwtr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2998'
Feb 2 12:57:39.338: INFO: stderr: ""
Feb 2 12:57:39.338: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 2 12:57:39.338: INFO: validating pod update-demo-nautilus-wzwtr
Feb 2 12:57:39.344: INFO: got data: { "image": "nautilus.jpg" }
Feb 2 12:57:39.344: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 2 12:57:39.344: INFO: update-demo-nautilus-wzwtr is verified up and running
STEP: using delete to clean up resources
Feb 2 12:57:39.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2998'
Feb 2 12:57:39.430: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 2 12:57:39.430: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 2 12:57:39.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2998'
Feb 2 12:57:39.578: INFO: stderr: "No resources found.\n"
Feb 2 12:57:39.578: INFO: stdout: ""
Feb 2 12:57:39.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2998 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 2 12:57:39.763: INFO: stderr: ""
Feb 2 12:57:39.763: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:57:39.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2998" for this suite.
Feb 2 12:58:03.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:58:03.991: INFO: namespace kubectl-2998 deletion completed in 24.214401928s

• [SLOW TEST:60.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:58:03.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c2d350e3-e7da-4ada-af15-b2fa21e82f2a
STEP: Creating a pod to test consume secrets
Feb 2 12:58:04.166: INFO: Waiting up to 5m0s for pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3" in namespace "secrets-1183" to be "success or failure"
Feb 2 12:58:04.170: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.979291ms
Feb 2 12:58:06.176: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009717987s
Feb 2 12:58:08.187: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021657296s
Feb 2 12:58:10.194: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028342787s
Feb 2 12:58:12.199: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033654132s
Feb 2 12:58:14.208: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.042010057s
Feb 2 12:58:16.219: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.052819634s
STEP: Saw pod success
Feb 2 12:58:16.219: INFO: Pod "pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3" satisfied condition "success or failure"
Feb 2 12:58:16.224: INFO: Trying to get logs from node iruya-node pod pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3 container secret-volume-test: 
STEP: delete the pod
Feb 2 12:58:16.281: INFO: Waiting for pod pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3 to disappear
Feb 2 12:58:16.368: INFO: Pod pod-secrets-6b13dd08-7895-4988-b2b4-db20b8f03db3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:58:16.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1183" for this suite.
Feb 2 12:58:22.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:58:22.527: INFO: namespace secrets-1183 deletion completed in 6.145674621s

• [SLOW TEST:18.536 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:58:22.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 2 12:58:22.665: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812423,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 2 12:58:22.666: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812424,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 2 12:58:22.666: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812425,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 2 12:58:32.797: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812441,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 2 12:58:32.797: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812442,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 2 12:58:32.797: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9777,SelfLink:/api/v1/namespaces/watch-9777/configmaps/e2e-watch-test-label-changed,UID:9918e5bd-58e8-46fd-b33e-bc5e97083906,ResourceVersion:22812443,Generation:0,CreationTimestamp:2020-02-02 12:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:58:32.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9777" for this suite.
Feb 2 12:58:38.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:58:39.080: INFO: namespace watch-9777 deletion completed in 6.251035104s

• [SLOW TEST:16.552 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:58:39.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 2 12:58:55.798: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:58:55.826: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:58:57.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:58:57.846: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:58:59.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:58:59.835: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:59:01.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:59:01.834: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:59:03.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:59:03.848: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:59:05.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:59:05.832: INFO: Pod pod-with-poststart-http-hook still exists
Feb 2 12:59:07.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 2 12:59:07.837: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:59:07.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6916" for this suite.
Feb 2 12:59:29.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:59:29.981: INFO: namespace container-lifecycle-hook-6916 deletion completed in 22.137455449s

• [SLOW TEST:50.900 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:59:29.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 2 12:59:30.073: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:59:31.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1437" for this suite.
Feb 2 12:59:37.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:59:37.491: INFO: namespace custom-resource-definition-1437 deletion completed in 6.226723595s

• [SLOW TEST:7.510 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:59:37.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 2 12:59:37.605: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 2 12:59:42.617: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 12:59:43.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9209" for this suite.
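"Then the pod is released" above means the controller stops counting the relabeled pod: the test patches one pod's name label so it no longer matches the RC's selector, after which the RC orphans it and creates a replacement. A sketch of that relabeling with client-go (label value is illustrative; v1.15-era Patch signature, newer client-go releases add a context.Context argument):

    package demo

    import (
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // releasePod rewrites the pod's "name" label so it stops matching the
    // RC selector (name=pod-release in the test above); the controller
    // then releases the pod and scales a fresh replica up in its place.
    func releasePod(cs kubernetes.Interface, ns, pod string) error {
    	patch := []byte(`{"metadata":{"labels":{"name":"released"}}}`)
    	_, err := cs.CoreV1().Pods(ns).Patch(pod, types.StrategicMergePatchType, patch)
    	return err
    }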
Feb 2 12:59:49.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 12:59:49.918: INFO: namespace replication-controller-9209 deletion completed in 6.240031739s

• [SLOW TEST:12.426 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 12:59:49.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a5f3976d-0765-43e5-9bda-c656372561c2
STEP: Creating a pod to test consume secrets
Feb 2 12:59:50.135: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171" in namespace "projected-2739" to be "success or failure"
Feb 2 12:59:50.145: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 9.697767ms
Feb 2 12:59:52.175: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039676649s
Feb 2 12:59:54.184: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048453627s
Feb 2 12:59:56.195: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059820651s
Feb 2 12:59:58.202: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066813513s
Feb 2 13:00:00.209: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073216016s
Feb 2 13:00:02.216: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080843944s
STEP: Saw pod success
Feb 2 13:00:02.217: INFO: Pod "pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171" satisfied condition "success or failure"
Feb 2 13:00:02.221: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171 container projected-secret-volume-test: 
STEP: delete the pod
Feb 2 13:00:02.263: INFO: Waiting for pod pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171 to disappear
Feb 2 13:00:02.273: INFO: Pod pod-projected-secrets-a0a4ddb0-57d4-4339-8090-cc269bf65171 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 13:00:02.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2739" for this suite.
Feb 2 13:00:08.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 13:00:08.554: INFO: namespace projected-2739 deletion completed in 6.128379113s

• [SLOW TEST:18.635 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 13:00:08.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 2 13:00:08.645: INFO: PodSpec: initContainers in spec.initContainers
Feb 2 13:01:14.064: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3c4853ea-e6d7-4cbe-84c8-7d2c67c96540", GenerateName:"", Namespace:"init-container-7817", SelfLink:"/api/v1/namespaces/init-container-7817/pods/pod-init-3c4853ea-e6d7-4cbe-84c8-7d2c67c96540", UID:"28f7f20a-4646-4d44-9638-0a5c16e7df31", ResourceVersion:"22812818", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716245208, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"645140051"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9swzc", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002164300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9swzc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9swzc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9swzc", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002984298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002cbc2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002984320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002984340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002984348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00298434c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245208, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245208, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245208, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245208, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002600540), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021ac150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021ac1c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c33567a7f718c04ceea60c0ebae89c14c75f1494239db08ab51fe30bf83b42ab"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002600640), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026005c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 2 13:01:14.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7817" for this suite.
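The pod dump above reduces to a simple shape: init1 runs /bin/false and keeps failing (RestartCount:3 by the time of the dump), so init2 never starts and the pause app container stays Waiting forever, even with RestartPolicy Always; the kubelet only retries the failing init container with backoff. A stripped-down sketch of that spec (names and images taken from the dump, everything else omitted):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Init containers run sequentially and must all succeed before any
    	// app container starts; a permanently failing init1 therefore
    	// blocks init2 and run1 for the pod's whole lifetime.
    	spec := corev1.PodSpec{
    		RestartPolicy: corev1.RestartPolicyAlways,
    		InitContainers: []corev1.Container{
    			{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
    			{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
    		},
    		Containers: []corev1.Container{
    			{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
    		},
    	}
    	fmt.Printf("%+v\n", spec)
    }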
Feb 2 13:01:36.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 2 13:01:36.335: INFO: namespace init-container-7817 deletion completed in 22.233817077s

• [SLOW TEST:87.781 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 2 13:01:36.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 2 13:01:36.461: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 2 13:01:41.467: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 2 13:01:43.481: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 2 13:01:45.491: INFO: Creating deployment "test-rollover-deployment"
Feb 2 13:01:45.516: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 2 13:01:47.537: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 2 13:01:47.550: INFO: Ensure that both replica sets have 1 created replica
Feb 2 13:01:47.558: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 2 13:01:47.570: INFO: Updating deployment test-rollover-deployment
Feb 2 13:01:47.570: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 2 13:01:49.592: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 2 13:01:49.601: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 2 13:01:49.613: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:49.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245307, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:01:51.628: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:51.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245307, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:01:53.631: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:53.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245307, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:01:55.633: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:55.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245307, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:01:57.627: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:57.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245316, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:01:59.626: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:01:59.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245316, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:02:01.627: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:02:01.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245316, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 2 13:02:03.635: INFO: all replica sets need to contain the pod-template-hash label
Feb 2 13:02:03.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."},
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245316, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 13:02:05.628: INFO: all replica sets need to contain the pod-template-hash label Feb 2 13:02:05.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245316, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716245305, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 2 13:02:07.631: INFO: Feb 2 13:02:07.631: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 2 13:02:07.641: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5741,SelfLink:/apis/apps/v1/namespaces/deployment-5741/deployments/test-rollover-deployment,UID:f45f46f6-90cf-49ea-821d-7a7dd90bd003,ResourceVersion:22812976,Generation:2,CreationTimestamp:2020-02-02 13:01:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-02 13:01:45 +0000 UTC 2020-02-02 13:01:45 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-02 13:02:07 +0000 UTC 2020-02-02 13:01:45 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 2 13:02:07.645: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5741,SelfLink:/apis/apps/v1/namespaces/deployment-5741/replicasets/test-rollover-deployment-854595fc44,UID:a5042368-12c1-4d3f-ae4d-fa62f2dc5277,ResourceVersion:22812965,Generation:2,CreationTimestamp:2020-02-02 13:01:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f45f46f6-90cf-49ea-821d-7a7dd90bd003 0xc001faf257 0xc001faf258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 2 13:02:07.645: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 2 13:02:07.645: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5741,SelfLink:/apis/apps/v1/namespaces/deployment-5741/replicasets/test-rollover-controller,UID:29cb7992-e240-43c4-87ac-92f3de808196,ResourceVersion:22812974,Generation:2,CreationTimestamp:2020-02-02 13:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f45f46f6-90cf-49ea-821d-7a7dd90bd003 0xc001faf187 0xc001faf188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 2 13:02:07.645: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5741,SelfLink:/apis/apps/v1/namespaces/deployment-5741/replicasets/test-rollover-deployment-9b8b997cf,UID:d04836ad-7c15-446c-8102-e6558e3416ec,ResourceVersion:22812922,Generation:2,CreationTimestamp:2020-02-02 13:01:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f45f46f6-90cf-49ea-821d-7a7dd90bd003 0xc001faf320 0xc001faf321}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 2 13:02:07.650: INFO: Pod "test-rollover-deployment-854595fc44-z6nls" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-z6nls,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5741,SelfLink:/api/v1/namespaces/deployment-5741/pods/test-rollover-deployment-854595fc44-z6nls,UID:ec5ee4e8-4579-4e3a-94fc-168148485dd0,ResourceVersion:22812949,Generation:0,CreationTimestamp:2020-02-02 13:01:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
a5042368-12c1-4d3f-ae4d-fa62f2dc5277 0xc001faff07 0xc001faff08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bjv5b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bjv5b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-bjv5b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001faff80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001faffa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:01:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:01:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:01:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:01:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-02 13:01:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-02 13:01:55 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3fef4b26fb0294b9dab21e28f1e10ee04cb7160090011256b6a16003c5fed2b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:02:07.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5741" for this suite. 
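The rollover verified here — a template change makes the deployment drain every old ReplicaSet to zero while the new one scales up — can be driven manually. A rough equivalent, assuming kubectl access (the deployment name and images are illustrative; minReadySeconds: 10 and the maxUnavailable: 0 / maxSurge: 1 strategy mirror the spec dumped above):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rollover-demo
spec:
  replicas: 1
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: rollover-demo
  template:
    metadata:
      labels:
        app: rollover-demo
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine
EOF
kubectl set image deployment/rollover-demo app=redis:5.0-alpine   # trigger the rollover
kubectl rollout status deployment/rollover-demo
kubectl get rs -l app=rollover-demo    # the superseded ReplicaSet should end at DESIRED 0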
Feb 2 13:02:13.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:02:13.819: INFO: namespace deployment-5741 deletion completed in 6.164008494s • [SLOW TEST:37.484 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:02:13.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 2 13:02:14.032: INFO: Waiting up to 5m0s for pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74" in namespace "emptydir-965" to be "success or failure" Feb 2 13:02:14.071: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74": Phase="Pending", Reason="", readiness=false. Elapsed: 39.338057ms Feb 2 13:02:16.082: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050316618s Feb 2 13:02:18.091: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05951122s Feb 2 13:02:20.100: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068825449s Feb 2 13:02:22.122: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089923211s STEP: Saw pod success Feb 2 13:02:22.122: INFO: Pod "pod-7e1c07ec-5c24-493a-903f-bcca7d46df74" satisfied condition "success or failure" Feb 2 13:02:22.129: INFO: Trying to get logs from node iruya-node pod pod-7e1c07ec-5c24-493a-903f-bcca7d46df74 container test-container: STEP: delete the pod Feb 2 13:02:22.233: INFO: Waiting for pod pod-7e1c07ec-5c24-493a-903f-bcca7d46df74 to disappear Feb 2 13:02:22.240: INFO: Pod pod-7e1c07ec-5c24-493a-903f-bcca7d46df74 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:02:22.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-965" for this suite. 
Feb 2 13:02:28.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:02:28.380: INFO: namespace emptydir-965 deletion completed in 6.134596681s • [SLOW TEST:14.560 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:02:28.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-cb42d072-c067-47b8-9a6a-bd10908fc969 STEP: Creating a pod to test consume configMaps Feb 2 13:02:28.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9" in namespace "configmap-6810" to be "success or failure" Feb 2 13:02:28.536: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9": Phase="Pending", Reason="", readiness=false. Elapsed: 34.349265ms Feb 2 13:02:30.556: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054223546s Feb 2 13:02:32.568: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066370395s Feb 2 13:02:34.580: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078783625s Feb 2 13:02:36.593: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091736269s STEP: Saw pod success Feb 2 13:02:36.594: INFO: Pod "pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9" satisfied condition "success or failure" Feb 2 13:02:36.598: INFO: Trying to get logs from node iruya-node pod pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9 container configmap-volume-test: STEP: delete the pod Feb 2 13:02:36.777: INFO: Waiting for pod pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9 to disappear Feb 2 13:02:36.895: INFO: Pod pod-configmaps-926f16be-a860-4f6f-8711-1dae350c29a9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:02:36.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6810" for this suite. 
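"With mappings" refers to the volume's items: field, which remaps a ConfigMap key onto a chosen relative path instead of the default one-file-per-key layout; "as non-root" means the mounted file must be readable at a non-zero UID. A sketch under those assumptions (all names, the key and the target path are illustrative):

kubectl create configmap cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: cm
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: path/to/data-2   # key remapped to this relative path
EOF
kubectl logs cm-volume-demo    # expect: value-1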
Feb 2 13:02:42.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:02:43.130: INFO: namespace configmap-6810 deletion completed in 6.217080627s • [SLOW TEST:14.750 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:02:43.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 2 13:02:51.817: INFO: Successfully updated pod "annotationupdate7317e456-f264-425e-9484-2c305502c379" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:02:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7807" for this suite. Feb 2 13:03:16.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:03:16.236: INFO: namespace downward-api-7807 deletion completed in 22.266997574s • [SLOW TEST:33.106 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:03:16.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0202 13:03:28.536478 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 2 13:03:28.536: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:03:28.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4979" for this suite. Feb 2 13:03:34.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:03:34.695: INFO: namespace gc-4979 deletion completed in 6.152889709s • [SLOW TEST:18.459 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:03:34.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 2 13:03:34.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133" in namespace "projected-9052" to be "success or failure" Feb 2 13:03:34.900: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133": Phase="Pending", Reason="", readiness=false. Elapsed: 9.874948ms Feb 2 13:03:36.911: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020396279s Feb 2 13:03:38.918: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02789436s Feb 2 13:03:40.928: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037526892s Feb 2 13:03:42.943: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052310541s STEP: Saw pod success Feb 2 13:03:42.943: INFO: Pod "downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133" satisfied condition "success or failure" Feb 2 13:03:42.948: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133 container client-container: STEP: delete the pod Feb 2 13:03:43.137: INFO: Waiting for pod downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133 to disappear Feb 2 13:03:43.159: INFO: Pod downwardapi-volume-8a28435f-8a98-4d39-8656-357aa2462133 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:03:43.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9052" for this suite. Feb 2 13:03:49.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:03:49.368: INFO: namespace projected-9052 deletion completed in 6.202441615s • [SLOW TEST:14.669 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:03:49.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 2 13:03:49.520: INFO: Waiting up to 5m0s for pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6" in namespace "downward-api-5212" to be "success or failure" Feb 2 13:03:49.530: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.428256ms Feb 2 13:03:51.547: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026619605s Feb 2 13:03:53.587: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067401527s Feb 2 13:03:55.618: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098032097s Feb 2 13:03:57.639: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119444004s STEP: Saw pod success Feb 2 13:03:57.640: INFO: Pod "downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6" satisfied condition "success or failure" Feb 2 13:03:57.647: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6 container client-container: STEP: delete the pod Feb 2 13:03:58.616: INFO: Waiting for pod downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6 to disappear Feb 2 13:03:58.625: INFO: Pod downwardapi-volume-779db78f-ae93-4571-820c-079b7077c7d6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:03:58.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5212" for this suite. Feb 2 13:04:04.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:04:04.919: INFO: namespace downward-api-5212 deletion completed in 6.287636888s • [SLOW TEST:15.551 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:04:04.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:04:11.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3129" for this suite. 
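The STEP sequence above has a direct kubectl analogue: create a service in a scratch namespace, delete the namespace, recreate it under the same name, and confirm no service came back with it. A sketch (names are illustrative):

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo --wait=true   # blocks until finalizers finish
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo                # expect: No resources found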
Feb 2 13:04:17.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:04:17.420: INFO: namespace namespaces-3129 deletion completed in 6.143363421s STEP: Destroying namespace "nsdeletetest-7583" for this suite. Feb 2 13:04:17.424: INFO: Namespace nsdeletetest-7583 was already deleted STEP: Destroying namespace "nsdeletetest-6248" for this suite. Feb 2 13:04:23.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:04:23.555: INFO: namespace nsdeletetest-6248 deletion completed in 6.131712303s • [SLOW TEST:18.636 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:04:23.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:04:33.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3804" for this suite. 
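The property checked here is simply that whatever a container writes to stdout is retrievable through the kubelet's logs endpoint. By hand (pod name and message are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busybox pod'"]
EOF
kubectl logs busybox-logs-demo   # the echoed line should come back verbatim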
Feb 2 13:05:13.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:05:13.932: INFO: namespace kubelet-test-3804 deletion completed in 40.185165646s • [SLOW TEST:50.376 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:05:13.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 2 13:05:34.110: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:34.110: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:34.207397 8 log.go:172] (0xc0021869a0) (0xc00120e5a0) Create stream I0202 13:05:34.207540 8 log.go:172] (0xc0021869a0) (0xc00120e5a0) Stream added, broadcasting: 1 I0202 13:05:34.214808 8 log.go:172] (0xc0021869a0) Reply frame received for 1 I0202 13:05:34.214865 8 log.go:172] (0xc0021869a0) (0xc0013ca140) Create stream I0202 13:05:34.214875 8 log.go:172] (0xc0021869a0) (0xc0013ca140) Stream added, broadcasting: 3 I0202 13:05:34.218588 8 log.go:172] (0xc0021869a0) Reply frame received for 3 I0202 13:05:34.218667 8 log.go:172] (0xc0021869a0) (0xc000dffe00) Create stream I0202 13:05:34.218676 8 log.go:172] (0xc0021869a0) (0xc000dffe00) Stream added, broadcasting: 5 I0202 13:05:34.220121 8 log.go:172] (0xc0021869a0) Reply frame received for 5 I0202 13:05:34.322621 8 log.go:172] (0xc0021869a0) Data frame received for 3 I0202 13:05:34.322710 8 log.go:172] (0xc0013ca140) (3) Data frame handling I0202 13:05:34.322736 8 log.go:172] (0xc0013ca140) (3) Data frame sent I0202 13:05:34.464459 8 log.go:172] (0xc0021869a0) Data frame received for 1 I0202 13:05:34.464533 8 log.go:172] (0xc0021869a0) (0xc0013ca140) Stream removed, broadcasting: 3 I0202 13:05:34.464660 8 log.go:172] (0xc00120e5a0) (1) Data frame handling I0202 13:05:34.464699 8 log.go:172] (0xc00120e5a0) (1) Data frame sent I0202 13:05:34.464764 8 log.go:172] (0xc0021869a0) (0xc000dffe00) Stream removed, broadcasting: 5 I0202 13:05:34.464793 8 log.go:172] (0xc0021869a0) (0xc00120e5a0) Stream removed, broadcasting: 1 I0202 13:05:34.464975 8 log.go:172] 
(0xc0021869a0) Go away received I0202 13:05:34.464999 8 log.go:172] (0xc0021869a0) (0xc00120e5a0) Stream removed, broadcasting: 1 I0202 13:05:34.465013 8 log.go:172] (0xc0021869a0) (0xc0013ca140) Stream removed, broadcasting: 3 I0202 13:05:34.465024 8 log.go:172] (0xc0021869a0) (0xc000dffe00) Stream removed, broadcasting: 5 Feb 2 13:05:34.465: INFO: Exec stderr: "" Feb 2 13:05:34.465: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:34.465: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:34.546007 8 log.go:172] (0xc001ab33f0) (0xc0022f1540) Create stream I0202 13:05:34.546301 8 log.go:172] (0xc001ab33f0) (0xc0022f1540) Stream added, broadcasting: 1 I0202 13:05:34.568738 8 log.go:172] (0xc001ab33f0) Reply frame received for 1 I0202 13:05:34.569002 8 log.go:172] (0xc001ab33f0) (0xc0017a6960) Create stream I0202 13:05:34.569026 8 log.go:172] (0xc001ab33f0) (0xc0017a6960) Stream added, broadcasting: 3 I0202 13:05:34.571602 8 log.go:172] (0xc001ab33f0) Reply frame received for 3 I0202 13:05:34.571652 8 log.go:172] (0xc001ab33f0) (0xc000dffea0) Create stream I0202 13:05:34.571664 8 log.go:172] (0xc001ab33f0) (0xc000dffea0) Stream added, broadcasting: 5 I0202 13:05:34.575986 8 log.go:172] (0xc001ab33f0) Reply frame received for 5 I0202 13:05:34.876915 8 log.go:172] (0xc001ab33f0) Data frame received for 3 I0202 13:05:34.877074 8 log.go:172] (0xc0017a6960) (3) Data frame handling I0202 13:05:34.877152 8 log.go:172] (0xc0017a6960) (3) Data frame sent I0202 13:05:35.099193 8 log.go:172] (0xc001ab33f0) (0xc0017a6960) Stream removed, broadcasting: 3 I0202 13:05:35.099310 8 log.go:172] (0xc001ab33f0) Data frame received for 1 I0202 13:05:35.099322 8 log.go:172] (0xc0022f1540) (1) Data frame handling I0202 13:05:35.099342 8 log.go:172] (0xc0022f1540) (1) Data frame sent I0202 13:05:35.099352 8 log.go:172] (0xc001ab33f0) (0xc0022f1540) Stream removed, broadcasting: 1 I0202 13:05:35.099475 8 log.go:172] (0xc001ab33f0) (0xc000dffea0) Stream removed, broadcasting: 5 I0202 13:05:35.099599 8 log.go:172] (0xc001ab33f0) Go away received I0202 13:05:35.099742 8 log.go:172] (0xc001ab33f0) (0xc0022f1540) Stream removed, broadcasting: 1 I0202 13:05:35.099803 8 log.go:172] (0xc001ab33f0) (0xc0017a6960) Stream removed, broadcasting: 3 I0202 13:05:35.099819 8 log.go:172] (0xc001ab33f0) (0xc000dffea0) Stream removed, broadcasting: 5 Feb 2 13:05:35.099: INFO: Exec stderr: "" Feb 2 13:05:35.099: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:35.100: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:35.178076 8 log.go:172] (0xc002187340) (0xc00120e8c0) Create stream I0202 13:05:35.178182 8 log.go:172] (0xc002187340) (0xc00120e8c0) Stream added, broadcasting: 1 I0202 13:05:35.190029 8 log.go:172] (0xc002187340) Reply frame received for 1 I0202 13:05:35.190060 8 log.go:172] (0xc002187340) (0xc0013ca280) Create stream I0202 13:05:35.190067 8 log.go:172] (0xc002187340) (0xc0013ca280) Stream added, broadcasting: 3 I0202 13:05:35.191695 8 log.go:172] (0xc002187340) Reply frame received for 3 I0202 13:05:35.191724 8 log.go:172] (0xc002187340) (0xc000dfff40) Create stream I0202 13:05:35.191733 8 log.go:172] (0xc002187340) (0xc000dfff40) Stream added, broadcasting: 5 I0202 13:05:35.193083 8 
log.go:172] (0xc002187340) Reply frame received for 5 I0202 13:05:35.321187 8 log.go:172] (0xc002187340) Data frame received for 3 I0202 13:05:35.321273 8 log.go:172] (0xc0013ca280) (3) Data frame handling I0202 13:05:35.321297 8 log.go:172] (0xc0013ca280) (3) Data frame sent I0202 13:05:35.417020 8 log.go:172] (0xc002187340) (0xc0013ca280) Stream removed, broadcasting: 3 I0202 13:05:35.417115 8 log.go:172] (0xc002187340) Data frame received for 1 I0202 13:05:35.417173 8 log.go:172] (0xc002187340) (0xc000dfff40) Stream removed, broadcasting: 5 I0202 13:05:35.417212 8 log.go:172] (0xc00120e8c0) (1) Data frame handling I0202 13:05:35.417249 8 log.go:172] (0xc00120e8c0) (1) Data frame sent I0202 13:05:35.417263 8 log.go:172] (0xc002187340) (0xc00120e8c0) Stream removed, broadcasting: 1 I0202 13:05:35.417282 8 log.go:172] (0xc002187340) Go away received I0202 13:05:35.417439 8 log.go:172] (0xc002187340) (0xc00120e8c0) Stream removed, broadcasting: 1 I0202 13:05:35.417448 8 log.go:172] (0xc002187340) (0xc0013ca280) Stream removed, broadcasting: 3 I0202 13:05:35.417451 8 log.go:172] (0xc002187340) (0xc000dfff40) Stream removed, broadcasting: 5 Feb 2 13:05:35.417: INFO: Exec stderr: "" Feb 2 13:05:35.417: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:35.417: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:35.464511 8 log.go:172] (0xc002187ce0) (0xc00120ebe0) Create stream I0202 13:05:35.464557 8 log.go:172] (0xc002187ce0) (0xc00120ebe0) Stream added, broadcasting: 1 I0202 13:05:35.471003 8 log.go:172] (0xc002187ce0) Reply frame received for 1 I0202 13:05:35.471027 8 log.go:172] (0xc002187ce0) (0xc0003785a0) Create stream I0202 13:05:35.471034 8 log.go:172] (0xc002187ce0) (0xc0003785a0) Stream added, broadcasting: 3 I0202 13:05:35.473242 8 log.go:172] (0xc002187ce0) Reply frame received for 3 I0202 13:05:35.473280 8 log.go:172] (0xc002187ce0) (0xc001bf2000) Create stream I0202 13:05:35.473295 8 log.go:172] (0xc002187ce0) (0xc001bf2000) Stream added, broadcasting: 5 I0202 13:05:35.474830 8 log.go:172] (0xc002187ce0) Reply frame received for 5 I0202 13:05:35.581225 8 log.go:172] (0xc002187ce0) Data frame received for 3 I0202 13:05:35.581292 8 log.go:172] (0xc0003785a0) (3) Data frame handling I0202 13:05:35.581311 8 log.go:172] (0xc0003785a0) (3) Data frame sent I0202 13:05:35.669092 8 log.go:172] (0xc002187ce0) Data frame received for 1 I0202 13:05:35.669146 8 log.go:172] (0xc00120ebe0) (1) Data frame handling I0202 13:05:35.669180 8 log.go:172] (0xc00120ebe0) (1) Data frame sent I0202 13:05:35.669201 8 log.go:172] (0xc002187ce0) (0xc00120ebe0) Stream removed, broadcasting: 1 I0202 13:05:35.669688 8 log.go:172] (0xc002187ce0) (0xc0003785a0) Stream removed, broadcasting: 3 I0202 13:05:35.669732 8 log.go:172] (0xc002187ce0) (0xc001bf2000) Stream removed, broadcasting: 5 I0202 13:05:35.669819 8 log.go:172] (0xc002187ce0) Go away received I0202 13:05:35.669856 8 log.go:172] (0xc002187ce0) (0xc00120ebe0) Stream removed, broadcasting: 1 I0202 13:05:35.669869 8 log.go:172] (0xc002187ce0) (0xc0003785a0) Stream removed, broadcasting: 3 I0202 13:05:35.669877 8 log.go:172] (0xc002187ce0) (0xc001bf2000) Stream removed, broadcasting: 5 Feb 2 13:05:35.669: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 2 13:05:35.670: INFO: ExecWithOptions {Command:[cat 
/etc/hosts] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:35.670: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:35.733133 8 log.go:172] (0xc000dfda20) (0xc0017a6fa0) Create stream I0202 13:05:35.733185 8 log.go:172] (0xc000dfda20) (0xc0017a6fa0) Stream added, broadcasting: 1 I0202 13:05:35.738945 8 log.go:172] (0xc000dfda20) Reply frame received for 1 I0202 13:05:35.739011 8 log.go:172] (0xc000dfda20) (0xc001bf2140) Create stream I0202 13:05:35.739019 8 log.go:172] (0xc000dfda20) (0xc001bf2140) Stream added, broadcasting: 3 I0202 13:05:35.739960 8 log.go:172] (0xc000dfda20) Reply frame received for 3 I0202 13:05:35.739982 8 log.go:172] (0xc000dfda20) (0xc00120ef00) Create stream I0202 13:05:35.739991 8 log.go:172] (0xc000dfda20) (0xc00120ef00) Stream added, broadcasting: 5 I0202 13:05:35.740894 8 log.go:172] (0xc000dfda20) Reply frame received for 5 I0202 13:05:35.804833 8 log.go:172] (0xc000dfda20) Data frame received for 3 I0202 13:05:35.804952 8 log.go:172] (0xc001bf2140) (3) Data frame handling I0202 13:05:35.804978 8 log.go:172] (0xc001bf2140) (3) Data frame sent I0202 13:05:35.904252 8 log.go:172] (0xc000dfda20) (0xc001bf2140) Stream removed, broadcasting: 3 I0202 13:05:35.904500 8 log.go:172] (0xc000dfda20) Data frame received for 1 I0202 13:05:35.904531 8 log.go:172] (0xc0017a6fa0) (1) Data frame handling I0202 13:05:35.904553 8 log.go:172] (0xc0017a6fa0) (1) Data frame sent I0202 13:05:35.904560 8 log.go:172] (0xc000dfda20) (0xc0017a6fa0) Stream removed, broadcasting: 1 I0202 13:05:35.904815 8 log.go:172] (0xc000dfda20) (0xc00120ef00) Stream removed, broadcasting: 5 I0202 13:05:35.904894 8 log.go:172] (0xc000dfda20) (0xc0017a6fa0) Stream removed, broadcasting: 1 I0202 13:05:35.904906 8 log.go:172] (0xc000dfda20) (0xc001bf2140) Stream removed, broadcasting: 3 I0202 13:05:35.904935 8 log.go:172] (0xc000dfda20) (0xc00120ef00) Stream removed, broadcasting: 5 I0202 13:05:35.905208 8 log.go:172] (0xc000dfda20) Go away received Feb 2 13:05:35.905: INFO: Exec stderr: "" Feb 2 13:05:35.905: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:35.905: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:35.952684 8 log.go:172] (0xc002078a50) (0xc00120f180) Create stream I0202 13:05:35.952784 8 log.go:172] (0xc002078a50) (0xc00120f180) Stream added, broadcasting: 1 I0202 13:05:35.959399 8 log.go:172] (0xc002078a50) Reply frame received for 1 I0202 13:05:35.959431 8 log.go:172] (0xc002078a50) (0xc0017a74a0) Create stream I0202 13:05:35.959438 8 log.go:172] (0xc002078a50) (0xc0017a74a0) Stream added, broadcasting: 3 I0202 13:05:35.960456 8 log.go:172] (0xc002078a50) Reply frame received for 3 I0202 13:05:35.960489 8 log.go:172] (0xc002078a50) (0xc001bf21e0) Create stream I0202 13:05:35.960502 8 log.go:172] (0xc002078a50) (0xc001bf21e0) Stream added, broadcasting: 5 I0202 13:05:35.963800 8 log.go:172] (0xc002078a50) Reply frame received for 5 I0202 13:05:36.046837 8 log.go:172] (0xc002078a50) Data frame received for 3 I0202 13:05:36.046869 8 log.go:172] (0xc0017a74a0) (3) Data frame handling I0202 13:05:36.046883 8 log.go:172] (0xc0017a74a0) (3) Data frame sent I0202 13:05:36.153586 8 log.go:172] (0xc002078a50) (0xc0017a74a0) Stream removed, broadcasting: 3 I0202 13:05:36.153799 8 log.go:172] (0xc002078a50) Data 
frame received for 1 I0202 13:05:36.153837 8 log.go:172] (0xc00120f180) (1) Data frame handling I0202 13:05:36.153858 8 log.go:172] (0xc00120f180) (1) Data frame sent I0202 13:05:36.153880 8 log.go:172] (0xc002078a50) (0xc00120f180) Stream removed, broadcasting: 1 I0202 13:05:36.154224 8 log.go:172] (0xc002078a50) (0xc001bf21e0) Stream removed, broadcasting: 5 I0202 13:05:36.154287 8 log.go:172] (0xc002078a50) Go away received I0202 13:05:36.154316 8 log.go:172] (0xc002078a50) (0xc00120f180) Stream removed, broadcasting: 1 I0202 13:05:36.154334 8 log.go:172] (0xc002078a50) (0xc0017a74a0) Stream removed, broadcasting: 3 I0202 13:05:36.154369 8 log.go:172] (0xc002078a50) (0xc001bf21e0) Stream removed, broadcasting: 5 Feb 2 13:05:36.154: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 2 13:05:36.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:36.154: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:36.212243 8 log.go:172] (0xc0000ecfd0) (0xc0003788c0) Create stream I0202 13:05:36.212314 8 log.go:172] (0xc0000ecfd0) (0xc0003788c0) Stream added, broadcasting: 1 I0202 13:05:36.223279 8 log.go:172] (0xc0000ecfd0) Reply frame received for 1 I0202 13:05:36.223317 8 log.go:172] (0xc0000ecfd0) (0xc00120f400) Create stream I0202 13:05:36.223332 8 log.go:172] (0xc0000ecfd0) (0xc00120f400) Stream added, broadcasting: 3 I0202 13:05:36.229315 8 log.go:172] (0xc0000ecfd0) Reply frame received for 3 I0202 13:05:36.229340 8 log.go:172] (0xc0000ecfd0) (0xc0013ca320) Create stream I0202 13:05:36.229348 8 log.go:172] (0xc0000ecfd0) (0xc0013ca320) Stream added, broadcasting: 5 I0202 13:05:36.230886 8 log.go:172] (0xc0000ecfd0) Reply frame received for 5 I0202 13:05:36.384910 8 log.go:172] (0xc0000ecfd0) Data frame received for 3 I0202 13:05:36.384990 8 log.go:172] (0xc00120f400) (3) Data frame handling I0202 13:05:36.385026 8 log.go:172] (0xc00120f400) (3) Data frame sent I0202 13:05:36.498195 8 log.go:172] (0xc0000ecfd0) (0xc00120f400) Stream removed, broadcasting: 3 I0202 13:05:36.498374 8 log.go:172] (0xc0000ecfd0) (0xc0013ca320) Stream removed, broadcasting: 5 I0202 13:05:36.498449 8 log.go:172] (0xc0000ecfd0) Data frame received for 1 I0202 13:05:36.498485 8 log.go:172] (0xc0003788c0) (1) Data frame handling I0202 13:05:36.498525 8 log.go:172] (0xc0003788c0) (1) Data frame sent I0202 13:05:36.498574 8 log.go:172] (0xc0000ecfd0) (0xc0003788c0) Stream removed, broadcasting: 1 I0202 13:05:36.498613 8 log.go:172] (0xc0000ecfd0) Go away received I0202 13:05:36.498894 8 log.go:172] (0xc0000ecfd0) (0xc0003788c0) Stream removed, broadcasting: 1 I0202 13:05:36.498915 8 log.go:172] (0xc0000ecfd0) (0xc00120f400) Stream removed, broadcasting: 3 I0202 13:05:36.498927 8 log.go:172] (0xc0000ecfd0) (0xc0013ca320) Stream removed, broadcasting: 5 Feb 2 13:05:36.499: INFO: Exec stderr: "" Feb 2 13:05:36.499: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:36.499: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:36.597540 8 log.go:172] (0xc001aaba20) (0xc001bf2500) Create stream I0202 13:05:36.597679 8 log.go:172] (0xc001aaba20) (0xc001bf2500) Stream added, broadcasting: 1 I0202 
13:05:36.603108 8 log.go:172] (0xc001aaba20) Reply frame received for 1 I0202 13:05:36.603162 8 log.go:172] (0xc001aaba20) (0xc0013ca3c0) Create stream I0202 13:05:36.603193 8 log.go:172] (0xc001aaba20) (0xc0013ca3c0) Stream added, broadcasting: 3 I0202 13:05:36.605792 8 log.go:172] (0xc001aaba20) Reply frame received for 3 I0202 13:05:36.605823 8 log.go:172] (0xc001aaba20) (0xc0013ca460) Create stream I0202 13:05:36.605837 8 log.go:172] (0xc001aaba20) (0xc0013ca460) Stream added, broadcasting: 5 I0202 13:05:36.607722 8 log.go:172] (0xc001aaba20) Reply frame received for 5 I0202 13:05:36.696612 8 log.go:172] (0xc001aaba20) Data frame received for 3 I0202 13:05:36.696748 8 log.go:172] (0xc0013ca3c0) (3) Data frame handling I0202 13:05:36.696812 8 log.go:172] (0xc0013ca3c0) (3) Data frame sent I0202 13:05:36.786260 8 log.go:172] (0xc001aaba20) (0xc0013ca3c0) Stream removed, broadcasting: 3 I0202 13:05:36.786355 8 log.go:172] (0xc001aaba20) Data frame received for 1 I0202 13:05:36.786377 8 log.go:172] (0xc001bf2500) (1) Data frame handling I0202 13:05:36.786398 8 log.go:172] (0xc001bf2500) (1) Data frame sent I0202 13:05:36.786535 8 log.go:172] (0xc001aaba20) (0xc0013ca460) Stream removed, broadcasting: 5 I0202 13:05:36.786589 8 log.go:172] (0xc001aaba20) (0xc001bf2500) Stream removed, broadcasting: 1 I0202 13:05:36.786625 8 log.go:172] (0xc001aaba20) Go away received I0202 13:05:36.786813 8 log.go:172] (0xc001aaba20) (0xc001bf2500) Stream removed, broadcasting: 1 I0202 13:05:36.786841 8 log.go:172] (0xc001aaba20) (0xc0013ca3c0) Stream removed, broadcasting: 3 I0202 13:05:36.786882 8 log.go:172] (0xc001aaba20) (0xc0013ca460) Stream removed, broadcasting: 5 Feb 2 13:05:36.786: INFO: Exec stderr: "" Feb 2 13:05:36.786: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:36.787: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:36.841323 8 log.go:172] (0xc0025484d0) (0xc001bf2820) Create stream I0202 13:05:36.841368 8 log.go:172] (0xc0025484d0) (0xc001bf2820) Stream added, broadcasting: 1 I0202 13:05:36.845870 8 log.go:172] (0xc0025484d0) Reply frame received for 1 I0202 13:05:36.845916 8 log.go:172] (0xc0025484d0) (0xc0013ca500) Create stream I0202 13:05:36.845927 8 log.go:172] (0xc0025484d0) (0xc0013ca500) Stream added, broadcasting: 3 I0202 13:05:36.847599 8 log.go:172] (0xc0025484d0) Reply frame received for 3 I0202 13:05:36.847637 8 log.go:172] (0xc0025484d0) (0xc0017a75e0) Create stream I0202 13:05:36.847654 8 log.go:172] (0xc0025484d0) (0xc0017a75e0) Stream added, broadcasting: 5 I0202 13:05:36.850925 8 log.go:172] (0xc0025484d0) Reply frame received for 5 I0202 13:05:36.947987 8 log.go:172] (0xc0025484d0) Data frame received for 3 I0202 13:05:36.948277 8 log.go:172] (0xc0013ca500) (3) Data frame handling I0202 13:05:36.948340 8 log.go:172] (0xc0013ca500) (3) Data frame sent I0202 13:05:37.043883 8 log.go:172] (0xc0025484d0) Data frame received for 1 I0202 13:05:37.043950 8 log.go:172] (0xc0025484d0) (0xc0013ca500) Stream removed, broadcasting: 3 I0202 13:05:37.044008 8 log.go:172] (0xc001bf2820) (1) Data frame handling I0202 13:05:37.044025 8 log.go:172] (0xc001bf2820) (1) Data frame sent I0202 13:05:37.044032 8 log.go:172] (0xc0025484d0) (0xc001bf2820) Stream removed, broadcasting: 1 I0202 13:05:37.044225 8 log.go:172] (0xc0025484d0) (0xc0017a75e0) Stream removed, broadcasting: 5 I0202 13:05:37.044273 8 log.go:172] 
(0xc0025484d0) (0xc001bf2820) Stream removed, broadcasting: 1 I0202 13:05:37.044284 8 log.go:172] (0xc0025484d0) (0xc0013ca500) Stream removed, broadcasting: 3 I0202 13:05:37.044296 8 log.go:172] (0xc0025484d0) (0xc0017a75e0) Stream removed, broadcasting: 5 I0202 13:05:37.044428 8 log.go:172] (0xc0025484d0) Go away received Feb 2 13:05:37.044: INFO: Exec stderr: "" Feb 2 13:05:37.044: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-236 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:05:37.044: INFO: >>> kubeConfig: /root/.kube/config I0202 13:05:37.092523 8 log.go:172] (0xc0022c8dc0) (0xc0017a7900) Create stream I0202 13:05:37.092596 8 log.go:172] (0xc0022c8dc0) (0xc0017a7900) Stream added, broadcasting: 1 I0202 13:05:37.097144 8 log.go:172] (0xc0022c8dc0) Reply frame received for 1 I0202 13:05:37.097179 8 log.go:172] (0xc0022c8dc0) (0xc001bf2aa0) Create stream I0202 13:05:37.097189 8 log.go:172] (0xc0022c8dc0) (0xc001bf2aa0) Stream added, broadcasting: 3 I0202 13:05:37.098301 8 log.go:172] (0xc0022c8dc0) Reply frame received for 3 I0202 13:05:37.098328 8 log.go:172] (0xc0022c8dc0) (0xc000378a00) Create stream I0202 13:05:37.098351 8 log.go:172] (0xc0022c8dc0) (0xc000378a00) Stream added, broadcasting: 5 I0202 13:05:37.099948 8 log.go:172] (0xc0022c8dc0) Reply frame received for 5 I0202 13:05:37.205001 8 log.go:172] (0xc0022c8dc0) Data frame received for 3 I0202 13:05:37.205068 8 log.go:172] (0xc001bf2aa0) (3) Data frame handling I0202 13:05:37.205094 8 log.go:172] (0xc001bf2aa0) (3) Data frame sent I0202 13:05:37.375218 8 log.go:172] (0xc0022c8dc0) (0xc001bf2aa0) Stream removed, broadcasting: 3 I0202 13:05:37.375430 8 log.go:172] (0xc0022c8dc0) Data frame received for 1 I0202 13:05:37.375462 8 log.go:172] (0xc0017a7900) (1) Data frame handling I0202 13:05:37.375514 8 log.go:172] (0xc0017a7900) (1) Data frame sent I0202 13:05:37.375535 8 log.go:172] (0xc0022c8dc0) (0xc000378a00) Stream removed, broadcasting: 5 I0202 13:05:37.375610 8 log.go:172] (0xc0022c8dc0) (0xc0017a7900) Stream removed, broadcasting: 1 I0202 13:05:37.375641 8 log.go:172] (0xc0022c8dc0) Go away received I0202 13:05:37.376099 8 log.go:172] (0xc0022c8dc0) (0xc0017a7900) Stream removed, broadcasting: 1 I0202 13:05:37.376123 8 log.go:172] (0xc0022c8dc0) (0xc001bf2aa0) Stream removed, broadcasting: 3 I0202 13:05:37.376138 8 log.go:172] (0xc0022c8dc0) (0xc000378a00) Stream removed, broadcasting: 5 Feb 2 13:05:37.376: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:05:37.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-236" for this suite. 
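The exec checks above reduce to two pod shapes: in test-pod, containers without an explicit /etc/hosts mount get the kubelet-managed hosts file, while a container that mounts its own file over /etc/hosts keeps it untouched; test-host-network-pod runs with hostNetwork: true, so none of its containers get the managed file. A minimal sketch of equivalent manifests, assuming a generic busybox image and invented names (the suite's real pods also stage an /etc/hosts-original copy for comparison):

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo                 # hypothetical name
spec:
  containers:
  - name: managed
    image: busybox
    command: ["sleep", "3600"]
    # No /etc/hosts mount here, so the kubelet injects its managed hosts file.
  - name: unmanaged
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: node-hosts
      mountPath: /etc/hosts            # container supplies its own /etc/hosts,
  volumes:                             # so the kubelet must leave it alone
  - name: node-hosts
    hostPath:
      path: /etc/hosts                 # the node's own hosts file
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnetwork-demo     # hypothetical name
spec:
  hostNetwork: true                    # shares the node's network namespace, so the
  containers:                          # kubelet does not manage /etc/hosts at all
  - name: probe
    image: busybox
    command: ["sleep", "3600"]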
Feb 2 13:06:29.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:06:29.527: INFO: namespace e2e-kubelet-etc-hosts-236 deletion completed in 52.139854017s • [SLOW TEST:75.595 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:06:29.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 2 13:06:38.945: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:06:39.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2416" for this suite. 
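The expectation logged above (Expected: &{OK} to match Container's Termination Message: OK) holds because the container writes its message file and exits zero: with TerminationMessagePolicy FallbackToLogsOnError, container logs are consulted only when the message file is empty and the container failed. A rough equivalent of such a pod, with invented names and a busybox image standing in for whatever the suite actually runs:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    # Write the message file and exit 0; the file wins because the container succeeded.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError

Once such a pod reaches Succeeded, kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}' should print OK.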
Feb 2 13:06:47.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:06:47.209: INFO: namespace container-runtime-2416 deletion completed in 8.12705153s • [SLOW TEST:17.682 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:06:47.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-b5d06096-20d6-4a6a-b10b-e27abc9efcb0 STEP: Creating configMap with name cm-test-opt-upd-30d399fc-a577-4515-9344-0c42881ce4cd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b5d06096-20d6-4a6a-b10b-e27abc9efcb0 STEP: Updating configmap cm-test-opt-upd-30d399fc-a577-4515-9344-0c42881ce4cd STEP: Creating configMap with name cm-test-opt-create-60398040-8598-48c7-b7f0-121a7884eb84 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:07:07.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5958" for this suite. 
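All three ConfigMaps in the steps above are consumed through configMap volume sources marked optional: true, which is what lets the test delete one map, update another, and create a third after the pod is already running, all without breaking the pod. A minimal sketch of that wiring, assuming a busybox image and invented mount paths (the cm-test-opt-* names are shortened from the generated ones in this run):

apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo             # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-del                   # its ConfigMap is deleted mid-test; the projected
      mountPath: /etc/cm-del         # file goes away once the kubelet resyncs
    - name: cm-upd                   # its ConfigMap is updated mid-test; the file
      mountPath: /etc/cm-upd         # content changes in place
    - name: cm-create                # its ConfigMap is created only after the pod;
      mountPath: /etc/cm-create      # the file appears once the map exists
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del
      optional: true                 # optional: the pod runs even if the map is absent
  - name: cm-upd
    configMap:
      name: cm-test-opt-upd
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create
      optional: true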
Feb 2 13:07:29.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:07:29.910: INFO: namespace configmap-5958 deletion completed in 22.121378031s • [SLOW TEST:42.700 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:07:29.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 2 13:07:30.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4" in namespace "downward-api-8086" to be "success or failure" Feb 2 13:07:30.073: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4": Phase="Pending", Reason="", readiness=false. Elapsed: 53.434605ms Feb 2 13:07:32.095: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07521245s Feb 2 13:07:34.121: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101421503s Feb 2 13:07:36.131: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111755323s Feb 2 13:07:38.146: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126682786s STEP: Saw pod success Feb 2 13:07:38.146: INFO: Pod "downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4" satisfied condition "success or failure" Feb 2 13:07:38.153: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4 container client-container: STEP: delete the pod Feb 2 13:07:38.359: INFO: Waiting for pod downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4 to disappear Feb 2 13:07:38.373: INFO: Pod downwardapi-volume-c9ab5745-be46-436c-be13-a28bbb5c66e4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:07:38.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8086" for this suite. 
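"should provide podname only" exercises a downwardAPI volume that projects exactly one field, metadata.name, into a file the container cats to its log, which is what the log-fetch step above reads back. A minimal sketch, with the mount path invented and the container name taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo        # invented mount path
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the single projected field: the pod's own name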
Feb 2 13:07:44.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:07:44.611: INFO: namespace downward-api-8086 deletion completed in 6.226749003s • [SLOW TEST:14.701 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:07:44.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-rrtdj in namespace proxy-4412 I0202 13:07:44.935982 8 runners.go:180] Created replication controller with name: proxy-service-rrtdj, namespace: proxy-4412, replica count: 1 I0202 13:07:45.988552 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:46.989116 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:47.990255 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:48.990815 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:49.991264 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:50.991922 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0202 13:07:51.993004 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:52.993662 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:53.994336 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:54.995443 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:55.995954 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:56.996539 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 
1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:57.997086 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0202 13:07:58.997527 8 runners.go:180] proxy-service-rrtdj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 2 13:07:59.004: INFO: setup took 14.249397139s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 2 13:07:59.026: INFO: (0) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 22.422534ms) Feb 2 13:07:59.026: INFO: (0) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 22.56506ms) Feb 2 13:07:59.026: INFO: (0) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 22.565077ms) Feb 2 13:07:59.026: INFO: (0) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 22.606873ms) Feb 2 13:07:59.027: INFO: (0) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 23.207178ms) Feb 2 13:07:59.028: INFO: (0) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 24.127039ms) Feb 2 13:07:59.032: INFO: (0) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 27.740486ms) Feb 2 13:07:59.033: INFO: (0) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 28.652342ms) Feb 2 13:07:59.033: INFO: (0) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 28.774974ms) Feb 2 13:07:59.035: INFO: (0) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 31.710064ms) Feb 2 13:07:59.036: INFO: (0) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 31.612502ms) Feb 2 13:07:59.040: INFO: (0) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 35.836222ms) Feb 2 13:07:59.040: INFO: (0) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 36.176503ms) Feb 2 13:07:59.040: INFO: (0) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 18.826414ms) Feb 2 13:07:59.069: INFO: (1) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 19.251569ms) Feb 2 13:07:59.070: INFO: (1) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 20.297124ms) Feb 2 13:07:59.070: INFO: (1) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 20.490714ms) Feb 2 13:07:59.070: INFO: (1) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 20.722764ms) Feb 2 13:07:59.070: INFO: (1) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 21.169302ms) Feb 2 13:07:59.072: INFO: (1) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 22.085931ms) Feb 2 13:07:59.072: INFO: (1) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 22.640213ms) Feb 2 13:07:59.072: INFO: (1) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 22.881516ms) Feb 2 13:07:59.073: INFO: (1) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 23.561457ms) Feb 2 13:07:59.074: INFO: (1) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 24.199304ms) Feb 2 13:07:59.074: INFO: (1) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: ... (200; 10.101397ms) Feb 2 13:07:59.086: INFO: (2) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 11.393236ms) Feb 2 13:07:59.088: INFO: (2) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 13.164203ms) Feb 2 13:07:59.088: INFO: (2) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 13.100892ms) Feb 2 13:07:59.089: INFO: (2) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 13.053896ms) Feb 2 13:07:59.089: INFO: (2) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 13.298409ms) Feb 2 13:07:59.092: INFO: (2) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 16.573723ms) Feb 2 13:07:59.092: INFO: (2) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 16.703395ms) Feb 2 13:07:59.092: INFO: (2) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 16.665817ms) Feb 2 13:07:59.092: INFO: (2) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 17.463075ms) Feb 2 13:07:59.093: INFO: (2) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 17.84777ms) Feb 2 13:07:59.094: INFO: (2) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: ... 
(200; 8.030997ms) Feb 2 13:07:59.106: INFO: (3) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 9.01751ms) Feb 2 13:07:59.107: INFO: (3) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 9.413302ms) Feb 2 13:07:59.108: INFO: (3) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 10.329403ms) Feb 2 13:07:59.108: INFO: (3) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 10.78758ms) Feb 2 13:07:59.110: INFO: (3) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 12.680362ms) Feb 2 13:07:59.111: INFO: (3) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 13.637016ms) Feb 2 13:07:59.113: INFO: (3) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 15.57474ms) Feb 2 13:07:59.113: INFO: (3) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 15.764981ms) Feb 2 13:07:59.114: INFO: (3) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 16.506943ms) Feb 2 13:07:59.114: INFO: (3) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 16.54408ms) Feb 2 13:07:59.114: INFO: (3) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 16.311441ms) Feb 2 13:07:59.114: INFO: (3) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 16.351868ms) Feb 2 13:07:59.115: INFO: (3) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 17.92075ms) Feb 2 13:07:59.120: INFO: (4) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 4.665778ms) Feb 2 13:07:59.122: INFO: (4) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 14.470817ms) Feb 2 13:07:59.133: INFO: (4) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 18.165584ms) Feb 2 13:07:59.134: INFO: (4) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 18.097509ms) Feb 2 13:07:59.134: INFO: (4) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 17.978521ms) Feb 2 13:07:59.135: INFO: (4) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 19.131648ms) Feb 2 13:07:59.135: INFO: (4) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 19.004681ms) Feb 2 13:07:59.135: INFO: (4) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 19.25931ms) Feb 2 13:07:59.135: INFO: (4) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 19.628061ms) Feb 2 13:07:59.139: INFO: (5) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 4.418536ms) Feb 2 13:07:59.148: INFO: (5) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 12.555866ms) Feb 2 13:07:59.148: INFO: (5) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 12.833639ms) Feb 2 13:07:59.148: INFO: (5) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 12.778837ms) Feb 2 13:07:59.148: INFO: (5) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 13.139884ms) Feb 2 13:07:59.149: INFO: (5) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 13.148463ms) Feb 2 13:07:59.149: INFO: (5) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 13.232264ms) Feb 2 13:07:59.149: INFO: (5) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 13.495793ms) Feb 2 13:07:59.149: INFO: (5) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 13.348514ms) Feb 2 13:07:59.149: INFO: (5) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 12.113691ms) Feb 2 13:07:59.165: INFO: (6) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 11.621192ms) Feb 2 13:07:59.165: INFO: (6) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 11.59122ms) Feb 2 13:07:59.166: INFO: (6) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 12.505861ms) Feb 2 13:07:59.166: INFO: (6) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 11.523581ms) Feb 2 13:07:59.166: INFO: (6) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 12.105377ms) Feb 2 13:07:59.166: INFO: (6) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 12.195258ms) Feb 2 13:07:59.167: INFO: (6) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 13.773796ms) Feb 2 13:07:59.172: INFO: (7) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 5.507132ms) Feb 2 13:07:59.173: INFO: (7) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 6.227533ms) Feb 2 13:07:59.173: INFO: (7) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 5.975901ms) Feb 2 13:07:59.173: INFO: (7) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 6.309579ms) Feb 2 13:07:59.173: INFO: (7) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 6.122505ms) Feb 2 13:07:59.174: INFO: (7) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 8.429455ms) Feb 2 13:07:59.177: INFO: (7) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 8.698787ms) Feb 2 13:07:59.177: INFO: (7) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 9.437054ms) Feb 2 13:07:59.177: INFO: (7) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 9.135328ms) Feb 2 13:07:59.177: INFO: (7) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 9.648361ms) Feb 2 13:07:59.177: INFO: (7) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 9.304238ms) Feb 2 13:07:59.180: INFO: (7) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 12.609126ms) Feb 2 13:07:59.180: INFO: (7) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 12.700701ms) Feb 2 13:07:59.180: INFO: (7) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 12.422813ms) Feb 2 13:07:59.181: INFO: (7) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 13.926241ms) Feb 2 13:07:59.188: INFO: (8) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 6.312008ms) Feb 2 13:07:59.188: INFO: (8) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 6.395837ms) Feb 2 13:07:59.189: INFO: (8) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 7.607632ms) Feb 2 13:07:59.190: INFO: (8) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 8.114749ms) Feb 2 13:07:59.190: INFO: (8) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 12.082817ms) Feb 2 13:07:59.194: INFO: (8) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 11.900794ms) Feb 2 13:07:59.196: INFO: (8) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 14.941645ms) Feb 2 13:07:59.197: INFO: (8) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 14.784307ms) Feb 2 13:07:59.197: INFO: (8) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 15.222319ms) Feb 2 13:07:59.197: INFO: (8) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 15.218421ms) Feb 2 13:07:59.197: INFO: (8) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 15.274805ms) Feb 2 13:07:59.197: INFO: (8) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 15.491126ms) Feb 2 13:07:59.198: INFO: (8) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 16.192615ms) Feb 2 13:07:59.199: INFO: (8) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 16.996931ms) Feb 2 13:07:59.207: INFO: (9) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 8.633578ms) Feb 2 13:07:59.207: INFO: (9) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 8.523705ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 9.201893ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 9.184467ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 9.115576ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 9.22016ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 9.233347ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 9.376408ms) Feb 2 13:07:59.208: INFO: (9) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 9.243807ms) Feb 2 13:07:59.215: INFO: (9) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 15.786112ms) Feb 2 13:07:59.215: INFO: (9) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 15.792842ms) Feb 2 13:07:59.215: INFO: (9) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 15.840672ms) Feb 2 13:07:59.215: INFO: (9) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 15.853157ms) Feb 2 13:07:59.216: INFO: (9) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 16.690569ms) Feb 2 13:07:59.216: INFO: (9) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 16.709128ms) Feb 2 13:07:59.227: INFO: (10) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 10.989348ms) Feb 2 13:07:59.227: INFO: (10) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 11.098295ms) Feb 2 13:07:59.229: INFO: (10) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 13.418328ms) Feb 2 13:07:59.229: INFO: (10) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 13.507867ms) Feb 2 13:07:59.230: INFO: (10) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 13.716859ms) Feb 2 13:07:59.231: INFO: (10) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 15.093384ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 15.631848ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... 
(200; 16.101776ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 16.046415ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 16.433608ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 16.649955ms) Feb 2 13:07:59.232: INFO: (10) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 16.597348ms) Feb 2 13:07:59.233: INFO: (10) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 17.278235ms) Feb 2 13:07:59.233: INFO: (10) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 17.515127ms) Feb 2 13:07:59.244: INFO: (11) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 10.222586ms) Feb 2 13:07:59.244: INFO: (11) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: ... (200; 13.054451ms) Feb 2 13:07:59.247: INFO: (11) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 13.319239ms) Feb 2 13:07:59.247: INFO: (11) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 13.226471ms) Feb 2 13:07:59.250: INFO: (11) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 16.73297ms) Feb 2 13:07:59.250: INFO: (11) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 16.76949ms) Feb 2 13:07:59.250: INFO: (11) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 17.188553ms) Feb 2 13:07:59.251: INFO: (11) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 17.140715ms) Feb 2 13:07:59.251: INFO: (11) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 17.109014ms) Feb 2 13:07:59.251: INFO: (11) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 17.446725ms) Feb 2 13:07:59.259: INFO: (12) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 8.557842ms) Feb 2 13:07:59.260: INFO: (12) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 8.831291ms) Feb 2 13:07:59.260: INFO: (12) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 8.567803ms) Feb 2 13:07:59.260: INFO: (12) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 9.179892ms) Feb 2 13:07:59.260: INFO: (12) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 9.080459ms) Feb 2 13:07:59.260: INFO: (12) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 9.257813ms) Feb 2 13:07:59.262: INFO: (12) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 11.462903ms) Feb 2 13:07:59.263: INFO: (12) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 12.473774ms) Feb 2 13:07:59.263: INFO: (12) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: ... 
(200; 8.668089ms) Feb 2 13:07:59.276: INFO: (13) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 10.908675ms) Feb 2 13:07:59.277: INFO: (13) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 11.733903ms) Feb 2 13:07:59.277: INFO: (13) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 12.059836ms) Feb 2 13:07:59.279: INFO: (13) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 13.049228ms) Feb 2 13:07:59.279: INFO: (13) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 13.871051ms) Feb 2 13:07:59.279: INFO: (13) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 14.163203ms) Feb 2 13:07:59.281: INFO: (13) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 15.854644ms) Feb 2 13:07:59.281: INFO: (13) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 16.055453ms) Feb 2 13:07:59.282: INFO: (13) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 16.572858ms) Feb 2 13:07:59.282: INFO: (13) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 16.494635ms) Feb 2 13:07:59.282: INFO: (13) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 16.636816ms) Feb 2 13:07:59.283: INFO: (13) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 17.120952ms) Feb 2 13:07:59.304: INFO: (14) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 21.534199ms) Feb 2 13:07:59.304: INFO: (14) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 21.539657ms) Feb 2 13:07:59.305: INFO: (14) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 21.833325ms) Feb 2 13:07:59.305: INFO: (14) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 22.469796ms) Feb 2 13:07:59.305: INFO: (14) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 22.428822ms) Feb 2 13:07:59.306: INFO: (14) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 23.125165ms) Feb 2 13:07:59.306: INFO: (14) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 23.177041ms) Feb 2 13:07:59.307: INFO: (14) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 23.702987ms) Feb 2 13:07:59.307: INFO: (14) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... 
(200; 23.94894ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 26.946503ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 26.960918ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 27.350564ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 27.207772ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 27.201654ms) Feb 2 13:07:59.310: INFO: (14) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 27.535047ms) Feb 2 13:07:59.319: INFO: (15) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 7.727928ms) Feb 2 13:07:59.320: INFO: (15) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 9.255724ms) Feb 2 13:07:59.320: INFO: (15) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 9.165941ms) Feb 2 13:07:59.320: INFO: (15) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 10.159181ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 9.181045ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 9.977098ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 9.895215ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 10.086253ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 10.347917ms) Feb 2 13:07:59.321: INFO: (15) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 10.114435ms) Feb 2 13:07:59.322: INFO: (15) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 10.597289ms) Feb 2 13:07:59.322: INFO: (15) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 11.080221ms) Feb 2 13:07:59.324: INFO: (15) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 12.987056ms) Feb 2 13:07:59.325: INFO: (15) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 13.705712ms) Feb 2 13:07:59.333: INFO: (16) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 8.453253ms) Feb 2 13:07:59.334: INFO: (16) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 9.258527ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 9.969758ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 10.391731ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 10.21229ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 10.431038ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 10.164049ms) Feb 2 13:07:59.335: INFO: (16) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test<... (200; 5.805483ms) Feb 2 13:07:59.345: INFO: (17) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 6.013208ms) Feb 2 13:07:59.345: INFO: (17) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 6.008954ms) Feb 2 13:07:59.345: INFO: (17) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 6.518181ms) Feb 2 13:07:59.353: INFO: (17) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 14.423036ms) Feb 2 13:07:59.354: INFO: (17) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: ... (200; 15.164757ms) Feb 2 13:07:59.358: INFO: (18) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 4.379142ms) Feb 2 13:07:59.362: INFO: (18) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 7.826532ms) Feb 2 13:07:59.362: INFO: (18) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 8.352655ms) Feb 2 13:07:59.364: INFO: (18) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 9.811042ms) Feb 2 13:07:59.364: INFO: (18) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 10.10336ms) Feb 2 13:07:59.364: INFO: (18) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 10.197235ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 13.520913ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 13.591658ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... (200; 13.725157ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:462/proxy/: tls qux (200; 13.637169ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 13.551945ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx/proxy/: test (200; 13.840062ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 13.600992ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 13.677815ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 13.964312ms) Feb 2 13:07:59.368: INFO: (18) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: test (200; 37.008862ms) Feb 2 13:07:59.405: INFO: (19) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:1080/proxy/: ... 
(200; 37.316059ms) Feb 2 13:07:59.405: INFO: (19) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:1080/proxy/: test<... (200; 37.203739ms) Feb 2 13:07:59.405: INFO: (19) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname2/proxy/: bar (200; 37.283615ms) Feb 2 13:07:59.407: INFO: (19) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 38.84677ms) Feb 2 13:07:59.407: INFO: (19) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname1/proxy/: tls baz (200; 39.187514ms) Feb 2 13:07:59.407: INFO: (19) /api/v1/namespaces/proxy-4412/pods/http:proxy-service-rrtdj-fqqbx:160/proxy/: foo (200; 39.219836ms) Feb 2 13:07:59.407: INFO: (19) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:460/proxy/: tls baz (200; 39.11947ms) Feb 2 13:07:59.407: INFO: (19) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname2/proxy/: bar (200; 39.369376ms) Feb 2 13:07:59.408: INFO: (19) /api/v1/namespaces/proxy-4412/services/proxy-service-rrtdj:portname1/proxy/: foo (200; 39.944628ms) Feb 2 13:07:59.410: INFO: (19) /api/v1/namespaces/proxy-4412/services/http:proxy-service-rrtdj:portname1/proxy/: foo (200; 41.885366ms) Feb 2 13:07:59.410: INFO: (19) /api/v1/namespaces/proxy-4412/services/https:proxy-service-rrtdj:tlsportname2/proxy/: tls qux (200; 42.01078ms) Feb 2 13:07:59.410: INFO: (19) /api/v1/namespaces/proxy-4412/pods/proxy-service-rrtdj-fqqbx:162/proxy/: bar (200; 41.995957ms) Feb 2 13:07:59.410: INFO: (19) /api/v1/namespaces/proxy-4412/pods/https:proxy-service-rrtdj-fqqbx:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7062/configmap-test-aa87f4c6-067f-4c62-b222-7adba45508bc STEP: Creating a pod to test consume configMaps Feb 2 13:08:11.435: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef" in namespace "configmap-7062" to be "success or failure" Feb 2 13:08:11.442: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667624ms Feb 2 13:08:13.455: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020402764s Feb 2 13:08:15.461: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026343103s Feb 2 13:08:17.473: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038309793s Feb 2 13:08:19.482: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047383435s Feb 2 13:08:21.493: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.058364312s STEP: Saw pod success Feb 2 13:08:21.493: INFO: Pod "pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef" satisfied condition "success or failure" Feb 2 13:08:21.500: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef container env-test: STEP: delete the pod Feb 2 13:08:21.590: INFO: Waiting for pod pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef to disappear Feb 2 13:08:21.597: INFO: Pod pod-configmaps-6f441d09-9a60-4f83-8c11-c8997335b3ef no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:08:21.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7062" for this suite. Feb 2 13:08:27.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:08:27.801: INFO: namespace configmap-7062 deletion completed in 6.196083314s • [SLOW TEST:16.488 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:08:27.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6763 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6763 STEP: Creating statefulset with conflicting port in namespace statefulset-6763 STEP: Waiting until pod test-pod starts running in namespace statefulset-6763 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-6763 Feb 2 13:08:40.152: INFO: Observed stateful pod in namespace: statefulset-6763, name: ss-0, uid: 60093ad5-776a-4b78-9d97-8bf42951d7b8, status phase: Pending. Waiting for statefulset controller to delete. Feb 2 13:08:40.281: INFO: Observed stateful pod in namespace: statefulset-6763, name: ss-0, uid: 60093ad5-776a-4b78-9d97-8bf42951d7b8, status phase: Failed. Waiting for statefulset controller to delete. Feb 2 13:08:40.303: INFO: Observed stateful pod in namespace: statefulset-6763, name: ss-0, uid: 60093ad5-776a-4b78-9d97-8bf42951d7b8, status phase: Failed. Waiting for statefulset controller to delete.
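The Pending and Failed phases observed above come from a deliberate host-port clash: a plain pod grabs a hostPort on the chosen node, and the StatefulSet pod ss-0 is pinned to the same node with the same hostPort, so the kubelet rejects it (phase Failed) and the controller keeps deleting and recreating it until the blocker is removed. Because both pods set nodeName directly, the scheduler's usual port-conflict check is bypassed. Roughly, with a hypothetical port and an assumed nginx image (the names ss, test-pod, and service test come from this run):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                     # the blocker pod named in the log above
spec:
  nodeName: iruya-node               # pin directly to a node, bypassing the scheduler
  containers:
  - name: blocker
    image: nginx                     # assumed image
    ports:
    - containerPort: 80
      hostPort: 21017                # hypothetical port; must match the StatefulSet
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                  # the headless service created in BeforeEach
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      nodeName: iruya-node           # same node, same hostPort: the kubelet rejects
      containers:                    # ss-0 while test-pod still holds the port
      - name: webserver
        image: nginx                 # assumed image
        ports:
        - containerPort: 80
          hostPort: 21017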
Feb 2 13:08:40.402: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6763 STEP: Removing pod with conflicting port in namespace statefulset-6763 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6763 and reaches the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 2 13:08:50.503: INFO: Deleting all statefulset in ns statefulset-6763 Feb 2 13:08:50.511: INFO: Scaling statefulset ss to 0 Feb 2 13:09:10.545: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 13:09:10.552: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:09:10.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6763" for this suite. Feb 2 13:09:18.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:09:18.795: INFO: namespace statefulset-6763 deletion completed in 8.203177644s • [SLOW TEST:50.994 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:09:18.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-55a53d9c-213e-49e3-93d5-d677443b2103 STEP: Creating a pod to test consume secrets Feb 2 13:09:18.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa" in namespace "projected-4678" to be "success or failure" Feb 2 13:09:18.957: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Pending", Reason="", readiness=false. Elapsed: 11.483254ms Feb 2 13:09:20.969: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023490364s Feb 2 13:09:22.980: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034506066s Feb 2 13:09:24.989: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Pending", Reason="", readiness=false.
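Note: in the "recreate evicted statefulset" test above, a plain pod is first pinned to a node holding a port, and the StatefulSet's ss-0 is created with a conflicting port on the same node; the kubelet rejects ss-0 (phase Failed) and the StatefulSet controller repeatedly deletes and recreates it, which is exactly the delete/recreate churn logged. Once the conflicting pod is removed, ss-0 is recreated once more and runs. A sketch of watching this by hand (the namespace from this run is destroyed when the suite finishes, so treat these commands as illustrative):

    # Watch ss-0 cycle Pending -> Failed -> deleted -> recreated while the
    # port conflict exists, then settle into Running once it is removed.
    kubectl get pods -n statefulset-6763 -w &
    kubectl delete pod test-pod -n statefulset-6763   # free the conflicting port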
Elapsed: 6.043231744s Feb 2 13:09:26.998: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051755726s Feb 2 13:09:29.007: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06126578s STEP: Saw pod success Feb 2 13:09:29.007: INFO: Pod "pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa" satisfied condition "success or failure" Feb 2 13:09:29.031: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa container projected-secret-volume-test: STEP: delete the pod Feb 2 13:09:29.102: INFO: Waiting for pod pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa to disappear Feb 2 13:09:29.107: INFO: Pod pod-projected-secrets-7b7636cb-2219-4ff8-a10e-904e5b9aaafa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:09:29.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4678" for this suite. Feb 2 13:09:35.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:09:35.269: INFO: namespace projected-4678 deletion completed in 6.153841142s • [SLOW TEST:16.473 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:09:35.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6449 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 2 13:09:35.364: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 2 13:10:11.626: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6449 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:10:11.626: INFO: >>> kubeConfig: /root/.kube/config I0202 13:10:11.720582 8 log.go:172] (0xc002f7b1e0) (0xc00293ed20) Create stream I0202 13:10:11.720707 8 log.go:172] (0xc002f7b1e0) (0xc00293ed20) Stream added, broadcasting: 1 I0202 13:10:11.732212 8 log.go:172] (0xc002f7b1e0) Reply frame received for 1 I0202 13:10:11.732264 8 
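Note: the projected-secret test above mounts a Secret through a "projected" volume and remaps its key to a new file name (the "with mappings" part). A minimal sketch, assuming a working cluster; all names are illustrative:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: demo-secret
              items:
              - key: data-1
                path: new-path-data-1   # the "mapping": key data-1 appears under this name
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
    EOF
    kubectl logs projected-secret-demo   # should print value-1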
log.go:172] (0xc002f7b1e0) (0xc000e4ab40) Create stream I0202 13:10:11.732287 8 log.go:172] (0xc002f7b1e0) (0xc000e4ab40) Stream added, broadcasting: 3 I0202 13:10:11.735361 8 log.go:172] (0xc002f7b1e0) Reply frame received for 3 I0202 13:10:11.735389 8 log.go:172] (0xc002f7b1e0) (0xc001cae1e0) Create stream I0202 13:10:11.735398 8 log.go:172] (0xc002f7b1e0) (0xc001cae1e0) Stream added, broadcasting: 5 I0202 13:10:11.737367 8 log.go:172] (0xc002f7b1e0) Reply frame received for 5 I0202 13:10:11.943620 8 log.go:172] (0xc002f7b1e0) Data frame received for 3 I0202 13:10:11.943704 8 log.go:172] (0xc000e4ab40) (3) Data frame handling I0202 13:10:11.943731 8 log.go:172] (0xc000e4ab40) (3) Data frame sent I0202 13:10:12.103781 8 log.go:172] (0xc002f7b1e0) Data frame received for 1 I0202 13:10:12.103875 8 log.go:172] (0xc002f7b1e0) (0xc000e4ab40) Stream removed, broadcasting: 3 I0202 13:10:12.103936 8 log.go:172] (0xc00293ed20) (1) Data frame handling I0202 13:10:12.103966 8 log.go:172] (0xc00293ed20) (1) Data frame sent I0202 13:10:12.104016 8 log.go:172] (0xc002f7b1e0) (0xc001cae1e0) Stream removed, broadcasting: 5 I0202 13:10:12.104065 8 log.go:172] (0xc002f7b1e0) (0xc00293ed20) Stream removed, broadcasting: 1 I0202 13:10:12.104090 8 log.go:172] (0xc002f7b1e0) Go away received I0202 13:10:12.104274 8 log.go:172] (0xc002f7b1e0) (0xc00293ed20) Stream removed, broadcasting: 1 I0202 13:10:12.104287 8 log.go:172] (0xc002f7b1e0) (0xc000e4ab40) Stream removed, broadcasting: 3 I0202 13:10:12.104294 8 log.go:172] (0xc002f7b1e0) (0xc001cae1e0) Stream removed, broadcasting: 5 Feb 2 13:10:12.104: INFO: Waiting for endpoints: map[] Feb 2 13:10:12.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6449 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:10:12.111: INFO: >>> kubeConfig: /root/.kube/config I0202 13:10:12.216899 8 log.go:172] (0xc002f7ba20) (0xc00293f040) Create stream I0202 13:10:12.217212 8 log.go:172] (0xc002f7ba20) (0xc00293f040) Stream added, broadcasting: 1 I0202 13:10:12.229751 8 log.go:172] (0xc002f7ba20) Reply frame received for 1 I0202 13:10:12.229815 8 log.go:172] (0xc002f7ba20) (0xc0002f5680) Create stream I0202 13:10:12.229831 8 log.go:172] (0xc002f7ba20) (0xc0002f5680) Stream added, broadcasting: 3 I0202 13:10:12.232162 8 log.go:172] (0xc002f7ba20) Reply frame received for 3 I0202 13:10:12.232230 8 log.go:172] (0xc002f7ba20) (0xc00293f180) Create stream I0202 13:10:12.232266 8 log.go:172] (0xc002f7ba20) (0xc00293f180) Stream added, broadcasting: 5 I0202 13:10:12.235255 8 log.go:172] (0xc002f7ba20) Reply frame received for 5 I0202 13:10:12.348354 8 log.go:172] (0xc002f7ba20) Data frame received for 3 I0202 13:10:12.348430 8 log.go:172] (0xc0002f5680) (3) Data frame handling I0202 13:10:12.348448 8 log.go:172] (0xc0002f5680) (3) Data frame sent I0202 13:10:12.490422 8 log.go:172] (0xc002f7ba20) (0xc00293f180) Stream removed, broadcasting: 5 I0202 13:10:12.490598 8 log.go:172] (0xc002f7ba20) Data frame received for 1 I0202 13:10:12.490623 8 log.go:172] (0xc002f7ba20) (0xc0002f5680) Stream removed, broadcasting: 3 I0202 13:10:12.490655 8 log.go:172] (0xc00293f040) (1) Data frame handling I0202 13:10:12.490676 8 log.go:172] (0xc00293f040) (1) Data frame sent I0202 13:10:12.490687 8 log.go:172] (0xc002f7ba20) (0xc00293f040) Stream removed, broadcasting: 1 I0202 13:10:12.490722 
8 log.go:172] (0xc002f7ba20) Go away received I0202 13:10:12.490860 8 log.go:172] (0xc002f7ba20) (0xc00293f040) Stream removed, broadcasting: 1 I0202 13:10:12.490876 8 log.go:172] (0xc002f7ba20) (0xc0002f5680) Stream removed, broadcasting: 3 I0202 13:10:12.490888 8 log.go:172] (0xc002f7ba20) (0xc00293f180) Stream removed, broadcasting: 5 Feb 2 13:10:12.490: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:10:12.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6449" for this suite. Feb 2 13:10:36.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:10:36.783: INFO: namespace pod-network-test-6449 deletion completed in 24.281404565s • [SLOW TEST:61.513 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:10:36.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-qxt2 STEP: Creating a pod to test atomic-volume-subpath Feb 2 13:10:37.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qxt2" in namespace "subpath-4271" to be "success or failure" Feb 2 13:10:37.049: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.703861ms Feb 2 13:10:39.057: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017921608s Feb 2 13:10:41.065: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026328228s Feb 2 13:10:43.078: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039334812s Feb 2 13:10:45.085: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 8.046259385s Feb 2 13:10:47.097: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 10.05774634s Feb 2 13:10:49.107: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. 
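Note: the intra-pod networking check above deploys a test webserver pod on each node plus a "host-test-container-pod", then curls the webserver's /dial endpoint, which fans the request out to a target pod and reports the hostname it reached. The probe can be replayed by hand with the very command the framework logged (the pod IPs 10.44.0.2, 10.44.0.1 and 10.32.0.4 are specific to this run, and the namespace is deleted afterwards):

    kubectl exec -n pod-network-test-6449 host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"
    # A JSON body naming the target pod's hostname means pod-to-pod HTTP works;
    # "Waiting for endpoints: map[]" in the log is the success condition
    # (no endpoints left unreached).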
Elapsed: 12.068227993s Feb 2 13:10:51.117: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 14.077853974s Feb 2 13:10:53.153: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 16.113975482s Feb 2 13:10:55.161: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 18.122355022s Feb 2 13:10:57.172: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 20.132567484s Feb 2 13:10:59.181: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 22.141458303s Feb 2 13:11:01.195: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 24.155770957s Feb 2 13:11:03.214: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Running", Reason="", readiness=true. Elapsed: 26.175213314s Feb 2 13:11:05.227: INFO: Pod "pod-subpath-test-configmap-qxt2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.18777152s STEP: Saw pod success Feb 2 13:11:05.227: INFO: Pod "pod-subpath-test-configmap-qxt2" satisfied condition "success or failure" Feb 2 13:11:05.232: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-qxt2 container test-container-subpath-configmap-qxt2: STEP: delete the pod Feb 2 13:11:05.339: INFO: Waiting for pod pod-subpath-test-configmap-qxt2 to disappear Feb 2 13:11:05.396: INFO: Pod pod-subpath-test-configmap-qxt2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qxt2 Feb 2 13:11:05.396: INFO: Deleting pod "pod-subpath-test-configmap-qxt2" in namespace "subpath-4271" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:11:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4271" for this suite. 
Feb 2 13:11:11.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:11:11.584: INFO: namespace subpath-4271 deletion completed in 6.179367572s • [SLOW TEST:34.801 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:11:11.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:11:19.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4719" for this suite. 
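Note: the subpath test above mounts a single key of a ConfigMap at a file path via volumeMounts[].subPath and has the container read it while the pod runs, hence the long Running phase before Succeeded. A minimal sketch, reusing the illustrative demo-config from the earlier note:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: cfg
        configMap:
          name: demo-config
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["sh", "-c", "cat /probe-volume/DATA_1"]
        volumeMounts:
        - name: cfg
          mountPath: /probe-volume/DATA_1   # mount just this one file...
          subPath: DATA_1                   # ...picked out of the volume by subPath
    EOF

One caveat worth knowing: a ConfigMap file mounted through subPath does not receive later updates to the ConfigMap, which is why the atomic-writer behaviour gets its own set of tests.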
Feb 2 13:12:11.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:12:12.072: INFO: namespace kubelet-test-4719 deletion completed in 52.299707317s • [SLOW TEST:60.488 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:12:12.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-7219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7219 to expose endpoints map[] Feb 2 13:12:12.245: INFO: successfully validated that service endpoint-test2 in namespace services-7219 exposes endpoints map[] (11.812567ms elapsed) STEP: Creating pod pod1 in namespace services-7219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7219 to expose endpoints map[pod1:[80]] Feb 2 13:12:16.424: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.123159568s elapsed, will retry) Feb 2 13:12:19.468: INFO: successfully validated that service endpoint-test2 in namespace services-7219 exposes endpoints map[pod1:[80]] (7.167399519s elapsed) STEP: Creating pod pod2 in namespace services-7219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7219 to expose endpoints map[pod1:[80] pod2:[80]] Feb 2 13:12:24.684: INFO: Unexpected endpoints: found map[660938ea-2caa-4440-8577-ea0e82a5df48:[80]], expected map[pod1:[80] pod2:[80]] (5.19634455s elapsed, will retry) Feb 2 13:12:27.751: INFO: successfully validated that service endpoint-test2 in namespace services-7219 exposes endpoints map[pod1:[80] pod2:[80]] (8.263998998s elapsed) STEP: Deleting pod pod1 in namespace services-7219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7219 to expose endpoints map[pod2:[80]] Feb 2 13:12:28.840: INFO: successfully validated that service endpoint-test2 in namespace services-7219 exposes endpoints map[pod2:[80]] (1.076225311s elapsed) STEP: Deleting pod pod2 in namespace services-7219 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7219 to expose endpoints map[] Feb 2 13:12:29.870: INFO: successfully validated that service endpoint-test2 in namespace services-7219 exposes endpoints map[] (1.01384968s elapsed) [AfterEach] [sig-network] 
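Note: the hostAliases test above verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts by the kubelet. A minimal sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: busybox-host-aliases
        image: busybox
        command: ["sh", "-c", "cat /etc/hosts"]
    EOF
    kubectl logs hostaliases-demo   # expect a "127.0.0.1 foo.local bar.local" line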
Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:12:31.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7219" for this suite. Feb 2 13:12:53.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:12:53.412: INFO: namespace services-7219 deletion completed in 22.195607638s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:41.339 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:12:53.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:12:53.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9739" for this suite. 
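Note: the Services test above checks that a service's endpoints object tracks pod lifecycle: empty with no pods, one address per Ready pod matching the selector, and back to empty after deletion (the map[pod1:[80]] notation in the log is the expected pod-name-to-port mapping). A hand version with illustrative names:

    kubectl create service clusterip endpoint-demo --tcp=80:80    # selector app=endpoint-demo
    kubectl get endpoints endpoint-demo -w &                      # watch addresses come and go
    kubectl run pod1 --image=nginx --restart=Never --labels="app=endpoint-demo"
    # ...pod1's IP:80 appears once pod1 is Ready...
    kubectl delete pod pod1                                       # ...and disappears again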
Feb 2 13:12:59.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:12:59.955: INFO: namespace kubelet-test-9739 deletion completed in 6.204173849s • [SLOW TEST:6.542 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:12:59.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9060 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9060 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9060 Feb 2 13:13:00.186: INFO: Found 0 stateful pods, waiting for 1 Feb 2 13:13:10.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 2 13:13:10.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 2 13:13:12.583: INFO: stderr: "I0202 13:13:12.184097 530 log.go:172] (0xc00070e4d0) (0xc0005d8780) Create stream\nI0202 13:13:12.184585 530 log.go:172] (0xc00070e4d0) (0xc0005d8780) Stream added, broadcasting: 1\nI0202 13:13:12.198752 530 log.go:172] (0xc00070e4d0) Reply frame received for 1\nI0202 13:13:12.198926 530 log.go:172] (0xc00070e4d0) (0xc00072a0a0) Create stream\nI0202 13:13:12.198959 530 log.go:172] (0xc00070e4d0) (0xc00072a0a0) Stream added, broadcasting: 3\nI0202 13:13:12.200619 530 log.go:172] (0xc00070e4d0) Reply frame received for 3\nI0202 13:13:12.200650 530 log.go:172] (0xc00070e4d0) (0xc00072a140) Create stream\nI0202 13:13:12.200660 530 log.go:172] (0xc00070e4d0) (0xc00072a140) Stream added, broadcasting: 5\nI0202 13:13:12.204223 530 log.go:172] (0xc00070e4d0) Reply frame received for 5\nI0202 13:13:12.324763 530 log.go:172] (0xc00070e4d0) Data frame received for 5\nI0202 
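Note: the short Kubelet test above creates a pod whose container command always fails, so the container crash-loops and never becomes ready; the assertion is simply that such a pod can still be deleted cleanly. A sketch (illustrative names; the real test uses a busybox command that always exits non-zero):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false-demo
    spec:
      containers:
      - name: bin-false
        image: busybox
        command: ["/bin/false"]   # exits 1 immediately; pod enters CrashLoopBackOff
    EOF
    kubectl delete pod bin-false-demo   # deletion must succeed despite the crash loop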
13:13:12.325004 530 log.go:172] (0xc00072a140) (5) Data frame handling\nI0202 13:13:12.325053 530 log.go:172] (0xc00072a140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:13:12.378486 530 log.go:172] (0xc00070e4d0) Data frame received for 3\nI0202 13:13:12.378695 530 log.go:172] (0xc00072a0a0) (3) Data frame handling\nI0202 13:13:12.378724 530 log.go:172] (0xc00072a0a0) (3) Data frame sent\nI0202 13:13:12.556473 530 log.go:172] (0xc00070e4d0) Data frame received for 1\nI0202 13:13:12.557083 530 log.go:172] (0xc0005d8780) (1) Data frame handling\nI0202 13:13:12.557192 530 log.go:172] (0xc0005d8780) (1) Data frame sent\nI0202 13:13:12.557263 530 log.go:172] (0xc00070e4d0) (0xc0005d8780) Stream removed, broadcasting: 1\nI0202 13:13:12.558472 530 log.go:172] (0xc00070e4d0) (0xc00072a0a0) Stream removed, broadcasting: 3\nI0202 13:13:12.558751 530 log.go:172] (0xc00070e4d0) (0xc00072a140) Stream removed, broadcasting: 5\nI0202 13:13:12.558856 530 log.go:172] (0xc00070e4d0) Go away received\nI0202 13:13:12.560433 530 log.go:172] (0xc00070e4d0) (0xc0005d8780) Stream removed, broadcasting: 1\nI0202 13:13:12.560526 530 log.go:172] (0xc00070e4d0) (0xc00072a0a0) Stream removed, broadcasting: 3\nI0202 13:13:12.560556 530 log.go:172] (0xc00070e4d0) (0xc00072a140) Stream removed, broadcasting: 5\n" Feb 2 13:13:12.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 2 13:13:12.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 2 13:13:12.601: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 2 13:13:22.618: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 2 13:13:22.619: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 13:13:22.661: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999463s Feb 2 13:13:23.676: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.979282773s Feb 2 13:13:24.688: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96417676s Feb 2 13:13:25.697: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.95300995s Feb 2 13:13:26.705: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.94410401s Feb 2 13:13:27.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93571912s Feb 2 13:13:28.730: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.92306705s Feb 2 13:13:29.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.910943479s Feb 2 13:13:30.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.901723779s Feb 2 13:13:31.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 884.997979ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9060 Feb 2 13:13:32.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 2 13:13:33.291: INFO: stderr: "I0202 13:13:33.024128 557 log.go:172] (0xc000116d10) (0xc0005aa820) Create stream\nI0202 13:13:33.024670 557 log.go:172] (0xc000116d10) (0xc0005aa820) Stream added, broadcasting: 1\nI0202 13:13:33.033819 557 log.go:172] (0xc000116d10) Reply frame received for 1\nI0202 13:13:33.033875 557 log.go:172] (0xc000116d10) (0xc000882000) Create 
stream\nI0202 13:13:33.033886 557 log.go:172] (0xc000116d10) (0xc000882000) Stream added, broadcasting: 3\nI0202 13:13:33.035184 557 log.go:172] (0xc000116d10) Reply frame received for 3\nI0202 13:13:33.035209 557 log.go:172] (0xc000116d10) (0xc0005aa8c0) Create stream\nI0202 13:13:33.035217 557 log.go:172] (0xc000116d10) (0xc0005aa8c0) Stream added, broadcasting: 5\nI0202 13:13:33.036539 557 log.go:172] (0xc000116d10) Reply frame received for 5\nI0202 13:13:33.155643 557 log.go:172] (0xc000116d10) Data frame received for 5\nI0202 13:13:33.155776 557 log.go:172] (0xc0005aa8c0) (5) Data frame handling\nI0202 13:13:33.155797 557 log.go:172] (0xc0005aa8c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 13:13:33.155854 557 log.go:172] (0xc000116d10) Data frame received for 3\nI0202 13:13:33.155881 557 log.go:172] (0xc000882000) (3) Data frame handling\nI0202 13:13:33.155902 557 log.go:172] (0xc000882000) (3) Data frame sent\nI0202 13:13:33.278855 557 log.go:172] (0xc000116d10) Data frame received for 1\nI0202 13:13:33.278994 557 log.go:172] (0xc000116d10) (0xc0005aa8c0) Stream removed, broadcasting: 5\nI0202 13:13:33.279096 557 log.go:172] (0xc0005aa820) (1) Data frame handling\nI0202 13:13:33.279135 557 log.go:172] (0xc0005aa820) (1) Data frame sent\nI0202 13:13:33.279172 557 log.go:172] (0xc000116d10) (0xc000882000) Stream removed, broadcasting: 3\nI0202 13:13:33.279190 557 log.go:172] (0xc000116d10) (0xc0005aa820) Stream removed, broadcasting: 1\nI0202 13:13:33.279209 557 log.go:172] (0xc000116d10) Go away received\nI0202 13:13:33.280180 557 log.go:172] (0xc000116d10) (0xc0005aa820) Stream removed, broadcasting: 1\nI0202 13:13:33.280204 557 log.go:172] (0xc000116d10) (0xc000882000) Stream removed, broadcasting: 3\nI0202 13:13:33.280212 557 log.go:172] (0xc000116d10) (0xc0005aa8c0) Stream removed, broadcasting: 5\n" Feb 2 13:13:33.291: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 2 13:13:33.291: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 2 13:13:33.298: INFO: Found 1 stateful pods, waiting for 3 Feb 2 13:13:43.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 13:13:43.310: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 13:13:43.310: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 2 13:13:53.308: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 2 13:13:53.309: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 2 13:13:53.309: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 2 13:13:53.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 2 13:13:54.209: INFO: stderr: "I0202 13:13:53.537305 575 log.go:172] (0xc00091c420) (0xc0007166e0) Create stream\nI0202 13:13:53.537705 575 log.go:172] (0xc00091c420) (0xc0007166e0) Stream added, broadcasting: 1\nI0202 13:13:53.544279 575 log.go:172] (0xc00091c420) Reply frame received for 1\nI0202 13:13:53.544353 575 log.go:172] (0xc00091c420) (0xc000716780) Create stream\nI0202 
13:13:53.544365 575 log.go:172] (0xc00091c420) (0xc000716780) Stream added, broadcasting: 3\nI0202 13:13:53.547408 575 log.go:172] (0xc00091c420) Reply frame received for 3\nI0202 13:13:53.547456 575 log.go:172] (0xc00091c420) (0xc000120280) Create stream\nI0202 13:13:53.547468 575 log.go:172] (0xc00091c420) (0xc000120280) Stream added, broadcasting: 5\nI0202 13:13:53.551984 575 log.go:172] (0xc00091c420) Reply frame received for 5\nI0202 13:13:53.701834 575 log.go:172] (0xc00091c420) Data frame received for 5\nI0202 13:13:53.702518 575 log.go:172] (0xc000120280) (5) Data frame handling\nI0202 13:13:53.702691 575 log.go:172] (0xc000120280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:13:53.703046 575 log.go:172] (0xc00091c420) Data frame received for 3\nI0202 13:13:53.703441 575 log.go:172] (0xc000716780) (3) Data frame handling\nI0202 13:13:53.703551 575 log.go:172] (0xc000716780) (3) Data frame sent\nI0202 13:13:54.197806 575 log.go:172] (0xc00091c420) (0xc000716780) Stream removed, broadcasting: 3\nI0202 13:13:54.198007 575 log.go:172] (0xc00091c420) Data frame received for 1\nI0202 13:13:54.198031 575 log.go:172] (0xc0007166e0) (1) Data frame handling\nI0202 13:13:54.198065 575 log.go:172] (0xc0007166e0) (1) Data frame sent\nI0202 13:13:54.198073 575 log.go:172] (0xc00091c420) (0xc000120280) Stream removed, broadcasting: 5\nI0202 13:13:54.198126 575 log.go:172] (0xc00091c420) (0xc0007166e0) Stream removed, broadcasting: 1\nI0202 13:13:54.198151 575 log.go:172] (0xc00091c420) Go away received\nI0202 13:13:54.199140 575 log.go:172] (0xc00091c420) (0xc0007166e0) Stream removed, broadcasting: 1\nI0202 13:13:54.199152 575 log.go:172] (0xc00091c420) (0xc000716780) Stream removed, broadcasting: 3\nI0202 13:13:54.199155 575 log.go:172] (0xc00091c420) (0xc000120280) Stream removed, broadcasting: 5\n" Feb 2 13:13:54.210: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 2 13:13:54.210: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 2 13:13:54.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 2 13:13:54.603: INFO: stderr: "I0202 13:13:54.344730 595 log.go:172] (0xc00092e0b0) (0xc0005c4aa0) Create stream\nI0202 13:13:54.345070 595 log.go:172] (0xc00092e0b0) (0xc0005c4aa0) Stream added, broadcasting: 1\nI0202 13:13:54.349978 595 log.go:172] (0xc00092e0b0) Reply frame received for 1\nI0202 13:13:54.350069 595 log.go:172] (0xc00092e0b0) (0xc00032a000) Create stream\nI0202 13:13:54.350085 595 log.go:172] (0xc00092e0b0) (0xc00032a000) Stream added, broadcasting: 3\nI0202 13:13:54.351153 595 log.go:172] (0xc00092e0b0) Reply frame received for 3\nI0202 13:13:54.351173 595 log.go:172] (0xc00092e0b0) (0xc0005c4b40) Create stream\nI0202 13:13:54.351180 595 log.go:172] (0xc00092e0b0) (0xc0005c4b40) Stream added, broadcasting: 5\nI0202 13:13:54.351839 595 log.go:172] (0xc00092e0b0) Reply frame received for 5\nI0202 13:13:54.460428 595 log.go:172] (0xc00092e0b0) Data frame received for 5\nI0202 13:13:54.460497 595 log.go:172] (0xc0005c4b40) (5) Data frame handling\nI0202 13:13:54.460511 595 log.go:172] (0xc0005c4b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:13:54.492114 595 log.go:172] (0xc00092e0b0) Data frame received for 3\nI0202 13:13:54.492159 595 log.go:172] (0xc00032a000) (3) Data 
frame handling\nI0202 13:13:54.492179 595 log.go:172] (0xc00032a000) (3) Data frame sent\nI0202 13:13:54.594019 595 log.go:172] (0xc00092e0b0) Data frame received for 1\nI0202 13:13:54.594184 595 log.go:172] (0xc0005c4aa0) (1) Data frame handling\nI0202 13:13:54.594242 595 log.go:172] (0xc0005c4aa0) (1) Data frame sent\nI0202 13:13:54.594779 595 log.go:172] (0xc00092e0b0) (0xc0005c4b40) Stream removed, broadcasting: 5\nI0202 13:13:54.594964 595 log.go:172] (0xc00092e0b0) (0xc0005c4aa0) Stream removed, broadcasting: 1\nI0202 13:13:54.595665 595 log.go:172] (0xc00092e0b0) (0xc00032a000) Stream removed, broadcasting: 3\nI0202 13:13:54.595734 595 log.go:172] (0xc00092e0b0) (0xc0005c4aa0) Stream removed, broadcasting: 1\nI0202 13:13:54.595812 595 log.go:172] (0xc00092e0b0) (0xc00032a000) Stream removed, broadcasting: 3\nI0202 13:13:54.595844 595 log.go:172] (0xc00092e0b0) (0xc0005c4b40) Stream removed, broadcasting: 5\nI0202 13:13:54.595891 595 log.go:172] (0xc00092e0b0) Go away received\n" Feb 2 13:13:54.603: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 2 13:13:54.604: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 2 13:13:54.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 2 13:13:54.980: INFO: stderr: "I0202 13:13:54.740205 614 log.go:172] (0xc00064e420) (0xc000714780) Create stream\nI0202 13:13:54.740588 614 log.go:172] (0xc00064e420) (0xc000714780) Stream added, broadcasting: 1\nI0202 13:13:54.744850 614 log.go:172] (0xc00064e420) Reply frame received for 1\nI0202 13:13:54.744893 614 log.go:172] (0xc00064e420) (0xc0007c8000) Create stream\nI0202 13:13:54.744906 614 log.go:172] (0xc00064e420) (0xc0007c8000) Stream added, broadcasting: 3\nI0202 13:13:54.747077 614 log.go:172] (0xc00064e420) Reply frame received for 3\nI0202 13:13:54.747136 614 log.go:172] (0xc00064e420) (0xc0007c80a0) Create stream\nI0202 13:13:54.747149 614 log.go:172] (0xc00064e420) (0xc0007c80a0) Stream added, broadcasting: 5\nI0202 13:13:54.748893 614 log.go:172] (0xc00064e420) Reply frame received for 5\nI0202 13:13:54.852239 614 log.go:172] (0xc00064e420) Data frame received for 5\nI0202 13:13:54.852371 614 log.go:172] (0xc0007c80a0) (5) Data frame handling\nI0202 13:13:54.852414 614 log.go:172] (0xc0007c80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:13:54.881750 614 log.go:172] (0xc00064e420) Data frame received for 3\nI0202 13:13:54.881816 614 log.go:172] (0xc0007c8000) (3) Data frame handling\nI0202 13:13:54.881852 614 log.go:172] (0xc0007c8000) (3) Data frame sent\nI0202 13:13:54.969880 614 log.go:172] (0xc00064e420) (0xc0007c8000) Stream removed, broadcasting: 3\nI0202 13:13:54.970088 614 log.go:172] (0xc00064e420) Data frame received for 1\nI0202 13:13:54.970109 614 log.go:172] (0xc000714780) (1) Data frame handling\nI0202 13:13:54.970131 614 log.go:172] (0xc000714780) (1) Data frame sent\nI0202 13:13:54.970263 614 log.go:172] (0xc00064e420) (0xc0007c80a0) Stream removed, broadcasting: 5\nI0202 13:13:54.970307 614 log.go:172] (0xc00064e420) (0xc000714780) Stream removed, broadcasting: 1\nI0202 13:13:54.970337 614 log.go:172] (0xc00064e420) Go away received\nI0202 13:13:54.971116 614 log.go:172] (0xc00064e420) (0xc000714780) Stream removed, broadcasting: 1\nI0202 13:13:54.971134 614 log.go:172] (0xc00064e420) 
(0xc0007c8000) Stream removed, broadcasting: 3\nI0202 13:13:54.971143 614 log.go:172] (0xc00064e420) (0xc0007c80a0) Stream removed, broadcasting: 5\n" Feb 2 13:13:54.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 2 13:13:54.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 2 13:13:54.980: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 13:13:54.985: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 2 13:14:05.008: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 2 13:14:05.008: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 2 13:14:05.008: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 2 13:14:05.031: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999724s Feb 2 13:14:06.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.988061767s Feb 2 13:14:07.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978685596s Feb 2 13:14:08.060: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967938045s Feb 2 13:14:09.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95936703s Feb 2 13:14:10.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.950903605s Feb 2 13:14:11.441: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.586980347s Feb 2 13:14:12.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.577640255s Feb 2 13:14:13.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.563530322s Feb 2 13:14:14.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 545.954799ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9060 Feb 2 13:14:15.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 2 13:14:16.041: INFO: stderr: "I0202 13:14:15.755143 632 log.go:172] (0xc00096c370) (0xc0005c06e0) Create stream\nI0202 13:14:15.755482 632 log.go:172] (0xc00096c370) (0xc0005c06e0) Stream added, broadcasting: 1\nI0202 13:14:15.762450 632 log.go:172] (0xc00096c370) Reply frame received for 1\nI0202 13:14:15.762484 632 log.go:172] (0xc00096c370) (0xc00055e280) Create stream\nI0202 13:14:15.762492 632 log.go:172] (0xc00096c370) (0xc00055e280) Stream added, broadcasting: 3\nI0202 13:14:15.763923 632 log.go:172] (0xc00096c370) Reply frame received for 3\nI0202 13:14:15.763945 632 log.go:172] (0xc00096c370) (0xc0005c0780) Create stream\nI0202 13:14:15.763954 632 log.go:172] (0xc00096c370) (0xc0005c0780) Stream added, broadcasting: 5\nI0202 13:14:15.765116 632 log.go:172] (0xc00096c370) Reply frame received for 5\nI0202 13:14:15.896904 632 log.go:172] (0xc00096c370) Data frame received for 3\nI0202 13:14:15.897052 632 log.go:172] (0xc00055e280) (3) Data frame handling\nI0202 13:14:15.897083 632 log.go:172] (0xc00055e280) (3) Data frame sent\nI0202 13:14:15.897131 632 log.go:172] (0xc00096c370) Data frame received for 5\nI0202 13:14:15.897138 632 log.go:172] (0xc0005c0780) (5) Data frame handling\nI0202 13:14:15.897150 632 log.go:172] (0xc0005c0780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 13:14:16.032808 
632 log.go:172] (0xc00096c370) (0xc00055e280) Stream removed, broadcasting: 3\nI0202 13:14:16.033209 632 log.go:172] (0xc00096c370) Data frame received for 1\nI0202 13:14:16.033247 632 log.go:172] (0xc0005c06e0) (1) Data frame handling\nI0202 13:14:16.033278 632 log.go:172] (0xc0005c06e0) (1) Data frame sent\nI0202 13:14:16.033293 632 log.go:172] (0xc00096c370) (0xc0005c0780) Stream removed, broadcasting: 5\nI0202 13:14:16.033346 632 log.go:172] (0xc00096c370) (0xc0005c06e0) Stream removed, broadcasting: 1\nI0202 13:14:16.033605 632 log.go:172] (0xc00096c370) Go away received\nI0202 13:14:16.034720 632 log.go:172] (0xc00096c370) (0xc0005c06e0) Stream removed, broadcasting: 1\nI0202 13:14:16.034775 632 log.go:172] (0xc00096c370) (0xc00055e280) Stream removed, broadcasting: 3\nI0202 13:14:16.034781 632 log.go:172] (0xc00096c370) (0xc0005c0780) Stream removed, broadcasting: 5\n" Feb 2 13:14:16.041: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 2 13:14:16.041: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 2 13:14:16.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 2 13:14:16.401: INFO: stderr: "I0202 13:14:16.185988 650 log.go:172] (0xc000522420) (0xc00039e820) Create stream\nI0202 13:14:16.186322 650 log.go:172] (0xc000522420) (0xc00039e820) Stream added, broadcasting: 1\nI0202 13:14:16.207251 650 log.go:172] (0xc000522420) Reply frame received for 1\nI0202 13:14:16.207428 650 log.go:172] (0xc000522420) (0xc000508280) Create stream\nI0202 13:14:16.207442 650 log.go:172] (0xc000522420) (0xc000508280) Stream added, broadcasting: 3\nI0202 13:14:16.210441 650 log.go:172] (0xc000522420) Reply frame received for 3\nI0202 13:14:16.210666 650 log.go:172] (0xc000522420) (0xc00039e000) Create stream\nI0202 13:14:16.210726 650 log.go:172] (0xc000522420) (0xc00039e000) Stream added, broadcasting: 5\nI0202 13:14:16.212966 650 log.go:172] (0xc000522420) Reply frame received for 5\nI0202 13:14:16.313274 650 log.go:172] (0xc000522420) Data frame received for 5\nI0202 13:14:16.313421 650 log.go:172] (0xc00039e000) (5) Data frame handling\nI0202 13:14:16.313453 650 log.go:172] (0xc00039e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 13:14:16.313500 650 log.go:172] (0xc000522420) Data frame received for 3\nI0202 13:14:16.313526 650 log.go:172] (0xc000508280) (3) Data frame handling\nI0202 13:14:16.313536 650 log.go:172] (0xc000508280) (3) Data frame sent\nI0202 13:14:16.391949 650 log.go:172] (0xc000522420) Data frame received for 1\nI0202 13:14:16.392030 650 log.go:172] (0xc00039e820) (1) Data frame handling\nI0202 13:14:16.392056 650 log.go:172] (0xc00039e820) (1) Data frame sent\nI0202 13:14:16.392455 650 log.go:172] (0xc000522420) (0xc00039e820) Stream removed, broadcasting: 1\nI0202 13:14:16.394840 650 log.go:172] (0xc000522420) (0xc000508280) Stream removed, broadcasting: 3\nI0202 13:14:16.395068 650 log.go:172] (0xc000522420) (0xc00039e000) Stream removed, broadcasting: 5\nI0202 13:14:16.395133 650 log.go:172] (0xc000522420) Go away received\nI0202 13:14:16.395213 650 log.go:172] (0xc000522420) (0xc00039e820) Stream removed, broadcasting: 1\nI0202 13:14:16.395239 650 log.go:172] (0xc000522420) (0xc000508280) Stream removed, broadcasting: 3\nI0202 13:14:16.395255 650 log.go:172] (0xc000522420) (0xc00039e000) 
Stream removed, broadcasting: 5\n" Feb 2 13:14:16.401: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 2 13:14:16.401: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 2 13:14:16.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 2 13:14:17.150: INFO: stderr: "I0202 13:14:16.773991 669 log.go:172] (0xc0009d6000) (0xc000ade140) Create stream\nI0202 13:14:16.774447 669 log.go:172] (0xc0009d6000) (0xc000ade140) Stream added, broadcasting: 1\nI0202 13:14:16.782670 669 log.go:172] (0xc0009d6000) Reply frame received for 1\nI0202 13:14:16.782857 669 log.go:172] (0xc0009d6000) (0xc0005bc140) Create stream\nI0202 13:14:16.782898 669 log.go:172] (0xc0009d6000) (0xc0005bc140) Stream added, broadcasting: 3\nI0202 13:14:16.786779 669 log.go:172] (0xc0009d6000) Reply frame received for 3\nI0202 13:14:16.786887 669 log.go:172] (0xc0009d6000) (0xc000ade280) Create stream\nI0202 13:14:16.786898 669 log.go:172] (0xc0009d6000) (0xc000ade280) Stream added, broadcasting: 5\nI0202 13:14:16.788714 669 log.go:172] (0xc0009d6000) Reply frame received for 5\nI0202 13:14:16.927521 669 log.go:172] (0xc0009d6000) Data frame received for 3\nI0202 13:14:16.927844 669 log.go:172] (0xc0005bc140) (3) Data frame handling\nI0202 13:14:16.927916 669 log.go:172] (0xc0009d6000) Data frame received for 5\nI0202 13:14:16.927950 669 log.go:172] (0xc000ade280) (5) Data frame handling\nI0202 13:14:16.927976 669 log.go:172] (0xc000ade280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 13:14:16.928066 669 log.go:172] (0xc0005bc140) (3) Data frame sent\nI0202 13:14:17.130130 669 log.go:172] (0xc0009d6000) Data frame received for 1\nI0202 13:14:17.130615 669 log.go:172] (0xc0009d6000) (0xc0005bc140) Stream removed, broadcasting: 3\nI0202 13:14:17.130730 669 log.go:172] (0xc000ade140) (1) Data frame handling\nI0202 13:14:17.130806 669 log.go:172] (0xc000ade140) (1) Data frame sent\nI0202 13:14:17.130953 669 log.go:172] (0xc0009d6000) (0xc000ade280) Stream removed, broadcasting: 5\nI0202 13:14:17.131041 669 log.go:172] (0xc0009d6000) (0xc000ade140) Stream removed, broadcasting: 1\nI0202 13:14:17.131082 669 log.go:172] (0xc0009d6000) Go away received\nI0202 13:14:17.133817 669 log.go:172] (0xc0009d6000) (0xc000ade140) Stream removed, broadcasting: 1\nI0202 13:14:17.133949 669 log.go:172] (0xc0009d6000) (0xc0005bc140) Stream removed, broadcasting: 3\nI0202 13:14:17.134105 669 log.go:172] (0xc0009d6000) (0xc000ade280) Stream removed, broadcasting: 5\n" Feb 2 13:14:17.150: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 2 13:14:17.150: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 2 13:14:17.150: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 2 13:14:47.212: INFO: Deleting all statefulset in ns statefulset-9060 Feb 2 13:14:47.218: INFO: Scaling statefulset ss to 0 Feb 2 13:14:47.233: INFO: Waiting for statefulset status.replicas updated to 0 Feb 2 13:14:47.237: INFO: Deleting statefulset ss [AfterEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:14:48.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9060" for this suite. Feb 2 13:14:54.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:14:54.683: INFO: namespace statefulset-9060 deletion completed in 6.341909719s • [SLOW TEST:114.728 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:14:54.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 2 13:14:54.830: INFO: Waiting up to 5m0s for pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451" in namespace "emptydir-9011" to be "success or failure" Feb 2 13:14:54.839: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Pending", Reason="", readiness=false. Elapsed: 8.93761ms Feb 2 13:14:56.853: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022800713s Feb 2 13:14:58.863: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033244233s Feb 2 13:15:00.877: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046906246s Feb 2 13:15:02.889: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059368782s Feb 2 13:15:04.897: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451": Phase="Succeeded", Reason="", readiness=false. 
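Note: the long scaling test above toggles pod readiness with the mv trick visible in the kubectl exec transcripts: the stateful pods serve /usr/share/nginx/html/index.html and their readiness check fetches it (an assumption based on the suite's nginx image, not spelled out in this log), so moving the file away marks a pod NotReady and moving it back restores Ready. With ordered pod management the controller then provably scales up in order (ss-0, ss-1, ss-2), refuses to proceed past an unhealthy pod in either direction, and scales down in reverse. The two halves of the toggle, taken directly from this run:

    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- \
      /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'   # -> NotReady
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9060 ss-0 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'   # -> Ready again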
Elapsed: 10.066463878s STEP: Saw pod success Feb 2 13:15:04.897: INFO: Pod "pod-8a42719f-e6d1-451d-a75b-1e0c0e201451" satisfied condition "success or failure" Feb 2 13:15:04.899: INFO: Trying to get logs from node iruya-node pod pod-8a42719f-e6d1-451d-a75b-1e0c0e201451 container test-container: STEP: delete the pod Feb 2 13:15:04.954: INFO: Waiting for pod pod-8a42719f-e6d1-451d-a75b-1e0c0e201451 to disappear Feb 2 13:15:05.097: INFO: Pod pod-8a42719f-e6d1-451d-a75b-1e0c0e201451 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:15:05.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9011" for this suite. Feb 2 13:15:11.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:15:11.265: INFO: namespace emptydir-9011 deletion completed in 6.160377267s • [SLOW TEST:16.580 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:15:11.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 2 13:15:19.482: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:15:19.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5957" for this suite. 
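Note: the EmptyDir test name encodes its parameters: run as root, expect file mode 0666, use the default medium (node disk, as opposed to medium: Memory). A minimal sketch that writes a file into an emptyDir and shows its permissions (illustrative names; the real test applies the mode through its test binary):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: scratch
        emptyDir: {}              # default medium
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /ed/f && chmod 0666 /ed/f && ls -l /ed/f"]
        volumeMounts:
        - name: scratch
          mountPath: /ed
    EOF
    kubectl logs emptydir-demo    # expect -rw-rw-rw- ... /ed/f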
Feb 2 13:15:25.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:15:25.738: INFO: namespace container-runtime-5957 deletion completed in 6.154475697s • [SLOW TEST:14.474 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:15:25.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Feb 2 13:15:25.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 2 13:15:26.072: INFO: stderr: "" Feb 2 13:15:26.073: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:15:26.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7448" for this suite. 
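The api-versions check above shells out to kubectl; the same assertion can be made directly with client-go's discovery client. A sketch, assuming the kubeconfig path used in this run:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		log.Fatal(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			// The legacy core group reports the bare GroupVersion "v1",
			// which is the entry the test looks for.
			if v.GroupVersion == "v1" {
				found = true
			}
		}
	}
	fmt.Println("v1 available:", found)
}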
Feb 2 13:15:32.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:15:32.229: INFO: namespace kubectl-7448 deletion completed in 6.148682404s • [SLOW TEST:6.490 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:15:32.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 2 13:15:32.316: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:15:56.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7768" for this suite. 
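The "setting up watch" and "verifying pod creation was observed" steps above rely on the watch API. A minimal client-go sketch of that pattern; the namespace and label selector are assumptions, and note that recent client-go takes a context where the v1.15-era client in this run does not:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Open a watch on pods matching a selector; the test uses the same
	// mechanism to confirm the ADDED and DELETED events for its pod.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "time=created", // illustrative selector
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type) // ADDED, MODIFIED, DELETED, ...
	}
}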
Feb 2 13:16:02.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:16:02.748: INFO: namespace pods-7768 deletion completed in 6.14495234s • [SLOW TEST:30.519 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:16:02.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 2 13:16:02.859: INFO: Waiting up to 5m0s for pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c" in namespace "downward-api-563" to be "success or failure" Feb 2 13:16:02.880: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.733344ms Feb 2 13:16:04.896: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036800143s Feb 2 13:16:06.909: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050080678s Feb 2 13:16:08.925: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06596284s Feb 2 13:16:10.934: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074792108s STEP: Saw pod success Feb 2 13:16:10.934: INFO: Pod "downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c" satisfied condition "success or failure" Feb 2 13:16:10.941: INFO: Trying to get logs from node iruya-node pod downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c container dapi-container: STEP: delete the pod Feb 2 13:16:11.142: INFO: Waiting for pod downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c to disappear Feb 2 13:16:11.152: INFO: Pod downward-api-b1184675-1929-4c1e-9e1f-0490dd19493c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:16:11.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-563" for this suite. 
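The env vars checked above come from downward-API resourceFieldRef entries on the container. A sketch of how such entries are built with k8s.io/api types (env var names are illustrative; the container name mirrors the test's dapi-container):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardAPIEnv exposes the container's own CPU/memory limits and requests
// as environment variables, which is what the test above verifies.
func downwardAPIEnv(containerName string) []corev1.EnvVar {
	mk := func(name, resource string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      resource,
				},
			},
		}
	}
	return []corev1.EnvVar{
		mk("CPU_LIMIT", "limits.cpu"),
		mk("MEMORY_LIMIT", "limits.memory"),
		mk("CPU_REQUEST", "requests.cpu"),
		mk("MEMORY_REQUEST", "requests.memory"),
	}
}

func main() {
	for _, e := range downwardAPIEnv("dapi-container") {
		fmt.Println(e.Name, "->", e.ValueFrom.ResourceFieldRef.Resource)
	}
}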
Feb 2 13:16:17.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:16:17.311: INFO: namespace downward-api-563 deletion completed in 6.150453859s • [SLOW TEST:14.562 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:16:17.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:16:47.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3605" for this suite. Feb 2 13:16:53.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:16:54.026: INFO: namespace namespaces-3605 deletion completed in 6.145199838s STEP: Destroying namespace "nsdeletetest-4473" for this suite. Feb 2 13:16:54.029: INFO: Namespace nsdeletetest-4473 was already deleted STEP: Destroying namespace "nsdeletetest-5395" for this suite. 
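Namespace deletion, as exercised above, is asynchronous: the namespace enters Terminating and its pods are removed before the namespace object itself disappears. A hedged client-go sketch of the delete-then-poll-until-NotFound loop (namespace name is illustrative):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceGone deletes a namespace and polls until the API server
// reports NotFound, roughly what the test does before recreating it.
func waitForNamespaceGone(cs *kubernetes.Clientset, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	for {
		_, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil
		}
		if err != nil {
			return err
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForNamespaceGone(cs, "nsdeletetest-demo"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("namespace fully removed")
}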
Feb 2 13:17:00.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:17:00.143: INFO: namespace nsdeletetest-5395 deletion completed in 6.113750939s • [SLOW TEST:42.831 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:17:00.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-dk7z STEP: Creating a pod to test atomic-volume-subpath Feb 2 13:17:00.217: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dk7z" in namespace "subpath-4920" to be "success or failure" Feb 2 13:17:00.224: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.906502ms Feb 2 13:17:02.232: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015165665s Feb 2 13:17:04.731: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.514189467s Feb 2 13:17:06.742: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524905748s Feb 2 13:17:08.757: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 8.539531115s Feb 2 13:17:10.767: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 10.549752013s Feb 2 13:17:12.790: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 12.573039475s Feb 2 13:17:14.801: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 14.583610025s Feb 2 13:17:16.811: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 16.593917939s Feb 2 13:17:18.822: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 18.605064233s Feb 2 13:17:20.832: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 20.614511075s Feb 2 13:17:22.845: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 22.627741929s Feb 2 13:17:24.862: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.644727237s Feb 2 13:17:26.872: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Running", Reason="", readiness=true. Elapsed: 26.654741348s Feb 2 13:17:28.948: INFO: Pod "pod-subpath-test-configmap-dk7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.730745231s STEP: Saw pod success Feb 2 13:17:28.948: INFO: Pod "pod-subpath-test-configmap-dk7z" satisfied condition "success or failure" Feb 2 13:17:28.957: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-dk7z container test-container-subpath-configmap-dk7z: STEP: delete the pod Feb 2 13:17:29.114: INFO: Waiting for pod pod-subpath-test-configmap-dk7z to disappear Feb 2 13:17:29.119: INFO: Pod pod-subpath-test-configmap-dk7z no longer exists STEP: Deleting pod pod-subpath-test-configmap-dk7z Feb 2 13:17:29.119: INFO: Deleting pod "pod-subpath-test-configmap-dk7z" in namespace "subpath-4920" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:17:29.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4920" for this suite. Feb 2 13:17:35.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:17:35.355: INFO: namespace subpath-4920 deletion completed in 6.229136315s • [SLOW TEST:35.212 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:17:35.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 2 13:17:35.455: INFO: Waiting up to 5m0s for pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2" in namespace "var-expansion-7228" to be "success or failure" Feb 2 13:17:35.498: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.336691ms Feb 2 13:17:37.516: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060595946s Feb 2 13:17:39.527: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07139081s Feb 2 13:17:41.545: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.089515786s Feb 2 13:17:43.561: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105842119s STEP: Saw pod success Feb 2 13:17:43.561: INFO: Pod "var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2" satisfied condition "success or failure" Feb 2 13:17:43.568: INFO: Trying to get logs from node iruya-node pod var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2 container dapi-container: STEP: delete the pod Feb 2 13:17:43.887: INFO: Waiting for pod var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2 to disappear Feb 2 13:17:43.893: INFO: Pod var-expansion-d0ed28c5-4fee-4cae-af37-2868c6b185e2 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:17:43.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7228" for this suite. Feb 2 13:17:49.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:17:50.047: INFO: namespace var-expansion-7228 deletion completed in 6.14771267s • [SLOW TEST:14.691 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:17:50.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 2 13:18:06.324: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 13:18:06.338: INFO: Pod pod-with-prestop-http-hook still exists Feb 2 13:18:08.339: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 13:18:08.346: INFO: Pod pod-with-prestop-http-hook still exists Feb 2 13:18:10.339: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 13:18:10.345: INFO: Pod pod-with-prestop-http-hook still exists Feb 2 13:18:12.339: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 2 13:18:12.350: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:18:12.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-814" for this suite. Feb 2 13:18:36.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:18:36.565: INFO: namespace container-lifecycle-hook-814 deletion completed in 24.16763551s • [SLOW TEST:46.518 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:18:36.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 2 13:18:45.753: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:18:46.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7782" for this suite. 
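Adoption and release show up in a pod's metadata.ownerReferences: the ReplicaSet adds itself as the controller reference when the pod's labels match its selector, and removes it when they stop matching. A sketch that inspects this with client-go (the namespace is an assumption; the pod name mirrors the test's pod-adoption-release):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// An adopted pod carries a controller ownerReference pointing at the
	// ReplicaSet; a released pod has no controller reference.
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller {
			fmt.Printf("controlled by %s/%s\n", ref.Kind, ref.Name)
		}
	}
}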
Feb 2 13:19:10.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:19:10.994: INFO: namespace replicaset-7782 deletion completed in 24.153529127s • [SLOW TEST:34.429 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:19:10.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 2 13:19:17.370: INFO: 1 pods remaining Feb 2 13:19:17.370: INFO: 0 pods has nil DeletionTimestamp Feb 2 13:19:17.370: INFO: STEP: Gathering metrics W0202 13:19:18.171437 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 2 13:19:18.171: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:19:18.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9954" for this suite. 
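Keeping the rc around until its pods are gone corresponds to foreground cascading deletion: the owner gets a deletionTimestamp and a foregroundDeletion finalizer, and is only removed after the garbage collector has deleted its dependents. A sketch of issuing such a delete (rc name and namespace are illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Foreground propagation blocks the owner's removal on its dependents,
	// which is the "rc stays around" behavior asserted above.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(),
		"demo-rc", metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("foreground delete issued")
}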
Feb 2 13:19:28.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:19:28.641: INFO: namespace gc-9954 deletion completed in 10.464208184s • [SLOW TEST:17.647 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:19:28.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 2 13:19:28.852: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:19:28.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6786" for this suite. 
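With --port 0 (here spelled -p 0) the proxy binds an ephemeral port and prints it on startup, which is how the test finds an endpoint to curl. A sketch that launches the same command and parses the port; the exact format of the startup line is an assumption:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"regexp"
)

func main() {
	// Same flags as the invocation logged above; --disable-filter allows
	// requests that do not come from the proxy's own host checks.
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// kubectl proxy prints a line like "Starting to serve on 127.0.0.1:43210";
	// pull the port off the end of it.
	re := regexp.MustCompile(`:(\d+)$`)
	sc := bufio.NewScanner(stdout)
	if sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("proxy port:", m[1])
		}
	}
	_ = cmd.Process.Kill()
}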
Feb 2 13:19:35.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:19:35.152: INFO: namespace kubectl-6786 deletion completed in 6.150505983s • [SLOW TEST:6.511 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:19:35.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 2 13:19:35.305: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815808,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 2 13:19:35.306: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815808,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 2 13:19:45.322: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815822,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 2 13:19:45.322: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815822,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 2 13:19:55.349: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815837,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 2 13:19:55.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815837,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 2 13:20:05.368: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815851,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 2 13:20:05.369: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-a,UID:639d4b85-ae5a-405c-b254-b42588cdaea5,ResourceVersion:22815851,Generation:0,CreationTimestamp:2020-02-02 13:19:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 2 13:20:15.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-b,UID:29d00db8-3855-4fa7-a31c-ae1ff6842fec,ResourceVersion:22815865,Generation:0,CreationTimestamp:2020-02-02 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 2 13:20:15.381: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-b,UID:29d00db8-3855-4fa7-a31c-ae1ff6842fec,ResourceVersion:22815865,Generation:0,CreationTimestamp:2020-02-02 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 2 13:20:25.394: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-b,UID:29d00db8-3855-4fa7-a31c-ae1ff6842fec,ResourceVersion:22815879,Generation:0,CreationTimestamp:2020-02-02 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 2 13:20:25.395: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7253,SelfLink:/api/v1/namespaces/watch-7253/configmaps/e2e-watch-test-configmap-b,UID:29d00db8-3855-4fa7-a31c-ae1ff6842fec,ResourceVersion:22815879,Generation:0,CreationTimestamp:2020-02-02 13:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:20:35.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7253" for this suite. Feb 2 13:20:41.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:20:41.601: INFO: namespace watch-7253 deletion completed in 6.179351988s • [SLOW TEST:66.448 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:20:41.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-9c2abc2f-708c-4aaf-916d-b88c503d7bf7 in namespace container-probe-8288 Feb 2 13:20:49.736: INFO: Started pod busybox-9c2abc2f-708c-4aaf-916d-b88c503d7bf7 in namespace container-probe-8288 STEP: checking the pod's current state and verifying that restartCount is present Feb 2 13:20:49.741: INFO: Initial restart count of pod busybox-9c2abc2f-708c-4aaf-916d-b88c503d7bf7 is 0 Feb 2 13:21:42.041: INFO: Restart count of pod container-probe-8288/busybox-9c2abc2f-708c-4aaf-916d-b88c503d7bf7 is now 1 (52.299733715s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:21:42.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8288" for this suite. 
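The restart counted above is driven by an exec liveness probe: the container creates /tmp/health, the kubelet periodically runs cat /tmp/health, and once the container removes the file the probe fails and the kubelet restarts the container. A sketch of the probe definition with current k8s.io/api types (the v1.15-era API nests the handler under a field named Handler rather than ProbeHandler; the thresholds are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// livenessExecProbe mirrors the "cat /tmp/health" probe from the test: the
// command succeeds while the file exists and fails after it is removed.
func livenessExecProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"cat", "/tmp/health"},
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
}

func main() {
	p := livenessExecProbe()
	fmt.Println(p.Exec.Command)
}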
Feb 2 13:21:48.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:21:48.582: INFO: namespace container-probe-8288 deletion completed in 6.498270977s • [SLOW TEST:66.980 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:21:48.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1305.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1305.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1305.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.151.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.151.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.151.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.151.135_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1305.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1305.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1305.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1305.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 135.151.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.151.135_udp@PTR;check="$$(dig +tcp +noall +answer +search 135.151.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.151.135_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 13:22:01.205: INFO: Unable to read wheezy_udp@dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.215: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.225: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.230: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.235: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.241: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.246: INFO: Unable to read wheezy_udp@PodARecord from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.251: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.255: INFO: Unable to read 10.100.151.135_udp@PTR from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.259: INFO: Unable to read 10.100.151.135_tcp@PTR from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.263: INFO: Unable to read jessie_udp@dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.271: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could 
not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.276: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.280: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.285: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.289: INFO: Unable to read jessie_udp@PodARecord from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.294: INFO: Unable to read jessie_tcp@PodARecord from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.297: INFO: Unable to read 10.100.151.135_udp@PTR from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.303: INFO: Unable to read 10.100.151.135_tcp@PTR from pod dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82: the server could not find the requested resource (get pods dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82) Feb 2 13:22:01.303: INFO: Lookups using dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82 failed for: [wheezy_udp@dns-test-service.dns-1305.svc.cluster.local wheezy_tcp@dns-test-service.dns-1305.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.100.151.135_udp@PTR 10.100.151.135_tcp@PTR jessie_udp@dns-test-service.dns-1305.svc.cluster.local jessie_tcp@dns-test-service.dns-1305.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1305.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-1305.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-1305.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.100.151.135_udp@PTR 10.100.151.135_tcp@PTR] Feb 2 13:22:06.402: INFO: DNS probes using dns-1305/dns-test-95f219ef-f113-4cb7-8b39-3be6c5ef3a82 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:22:06.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1305" for this suite. 
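The probers above drive dig from the wheezy and jessie images, but the same service records can be resolved from any in-cluster pod with Go's resolver; the _http._tcp.dns-test-service... names follow the _<port>._<proto>.<service>.<ns>.svc.<zone> SRV convention. A sketch, assuming it runs inside the cluster while the service still exists:

package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// A record for the ClusterIP service, as probed above.
	addrs, err := net.LookupHost("dns-test-service.dns-1305.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("A:", addrs)

	// SRV record for the named port; LookupSRV prepends _http._tcp itself.
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-1305.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
}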
Feb 2 13:22:12.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:22:12.979: INFO: namespace dns-1305 deletion completed in 6.393126509s • [SLOW TEST:24.396 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:22:12.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Feb 2 13:22:13.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4234' Feb 2 13:22:13.595: INFO: stderr: "" Feb 2 13:22:13.595: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Feb 2 13:22:14.612: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:14.613: INFO: Found 0 / 1 Feb 2 13:22:15.604: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:15.604: INFO: Found 0 / 1 Feb 2 13:22:16.613: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:16.613: INFO: Found 0 / 1 Feb 2 13:22:17.604: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:17.604: INFO: Found 0 / 1 Feb 2 13:22:18.620: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:18.620: INFO: Found 0 / 1 Feb 2 13:22:19.607: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:19.607: INFO: Found 0 / 1 Feb 2 13:22:20.611: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:20.611: INFO: Found 0 / 1 Feb 2 13:22:21.602: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:21.602: INFO: Found 1 / 1 Feb 2 13:22:21.602: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 2 13:22:21.607: INFO: Selector matched 1 pods for map[app:redis] Feb 2 13:22:21.607: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 2 13:22:21.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234' Feb 2 13:22:21.883: INFO: stderr: "" Feb 2 13:22:21.883: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Feb 13:22:20.246 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 13:22:20.247 # Server started, Redis version 3.2.12\n1:M 02 Feb 13:22:20.247 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 13:22:20.247 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 2 13:22:21.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234 --tail=1' Feb 2 13:22:22.124: INFO: stderr: "" Feb 2 13:22:22.124: INFO: stdout: "1:M 02 Feb 13:22:20.247 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 2 13:22:22.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234 --limit-bytes=1' Feb 2 13:22:22.306: INFO: stderr: "" Feb 2 13:22:22.306: INFO: stdout: " " STEP: exposing timestamps Feb 2 13:22:22.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234 --tail=1 --timestamps' Feb 2 13:22:22.604: INFO: stderr: "" Feb 2 13:22:22.604: INFO: stdout: "2020-02-02T13:22:20.24791823Z 1:M 02 Feb 13:22:20.247 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 2 13:22:25.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234 --since=1s' Feb 2 13:22:25.336: INFO: stderr: "" Feb 2 13:22:25.336: INFO: stdout: "" Feb 2 13:22:25.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jrhlz redis-master --namespace=kubectl-4234 --since=24h' Feb 2 13:22:25.503: INFO: stderr: "" Feb 2 13:22:25.503: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 Feb 13:22:20.246 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 13:22:20.247 # Server started, Redis version 3.2.12\n1:M 02 Feb 13:22:20.247 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 13:22:20.247 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Feb 2 13:22:25.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4234' Feb 2 13:22:25.600: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 2 13:22:25.600: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 2 13:22:25.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4234' Feb 2 13:22:25.716: INFO: stderr: "No resources found.\n" Feb 2 13:22:25.717: INFO: stdout: "" Feb 2 13:22:25.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4234 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 2 13:22:25.821: INFO: stderr: "" Feb 2 13:22:25.821: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:22:25.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4234" for this suite. Feb 2 13:22:47.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:22:48.046: INFO: namespace kubectl-4234 deletion completed in 22.218521316s • [SLOW TEST:35.067 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:22:48.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
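Aside on the [sig-cli] Kubectl logs spec that just finished above: the four filtering modes it exercises (line count, byte count, timestamps, time range) each map onto a single kubectl flag. A minimal sketch of the same calls, assuming a configured kubeconfig; the pod and container names are the ones from the run above and are long since deleted:

  kubectl logs redis-master-jrhlz redis-master --tail=1                # last line only
  kubectl logs redis-master-jrhlz redis-master --limit-bytes=1         # first byte of the stream
  kubectl logs redis-master-jrhlz redis-master --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
  kubectl logs redis-master-jrhlz redis-master --since=1s              # only entries newer than one second

The DaemonSet steps continue below.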
Feb 2 13:22:48.187: INFO: Number of nodes with available pods: 0 Feb 2 13:22:48.187: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:49.967: INFO: Number of nodes with available pods: 0 Feb 2 13:22:49.967: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:50.202: INFO: Number of nodes with available pods: 0 Feb 2 13:22:50.202: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:51.201: INFO: Number of nodes with available pods: 0 Feb 2 13:22:51.201: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:52.208: INFO: Number of nodes with available pods: 0 Feb 2 13:22:52.208: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:54.978: INFO: Number of nodes with available pods: 0 Feb 2 13:22:54.978: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:55.207: INFO: Number of nodes with available pods: 0 Feb 2 13:22:55.207: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:56.201: INFO: Number of nodes with available pods: 0 Feb 2 13:22:56.201: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:57.201: INFO: Number of nodes with available pods: 0 Feb 2 13:22:57.202: INFO: Node iruya-node is running more than one daemon pod Feb 2 13:22:58.207: INFO: Number of nodes with available pods: 1 Feb 2 13:22:58.207: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 2 13:22:59.203: INFO: Number of nodes with available pods: 2 Feb 2 13:22:59.203: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 2 13:22:59.266: INFO: Number of nodes with available pods: 2 Feb 2 13:22:59.266: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3449, will wait for the garbage collector to delete the pods Feb 2 13:23:00.954: INFO: Deleting DaemonSet.extensions daemon-set took: 16.160659ms Feb 2 13:23:01.454: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.784925ms Feb 2 13:23:16.666: INFO: Number of nodes with available pods: 0 Feb 2 13:23:16.666: INFO: Number of running nodes: 0, number of available pods: 0 Feb 2 13:23:16.674: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3449/daemonsets","resourceVersion":"22816284"},"items":null} Feb 2 13:23:16.678: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3449/pods","resourceVersion":"22816284"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:23:16.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3449" for this suite. 
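The DaemonSet spec above forces a daemon pod's phase to Failed through the API and waits for the controller to replace it. The same self-healing can be watched with an ordinary deletion; a minimal sketch, all names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: demo-ds
  spec:
    selector:
      matchLabels: {app: demo-ds}
    template:
      metadata:
        labels: {app: demo-ds}
      spec:
        containers:
        - {name: pause, image: k8s.gcr.io/pause:3.1}
  EOF
  kubectl delete pod -l app=demo-ds    # remove the daemon pods out from under the controller
  kubectl get pods -l app=demo-ds -w   # watch one replacement come back per node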
Feb 2 13:23:22.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:23:22.869: INFO: namespace daemonsets-3449 deletion completed in 6.17205668s • [SLOW TEST:34.823 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:23:22.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8042 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 2 13:23:22.998: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 2 13:24:03.322: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8042 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:24:03.322: INFO: >>> kubeConfig: /root/.kube/config I0202 13:24:03.410534 8 log.go:172] (0xc0021869a0) (0xc0020b9040) Create stream I0202 13:24:03.410662 8 log.go:172] (0xc0021869a0) (0xc0020b9040) Stream added, broadcasting: 1 I0202 13:24:03.420451 8 log.go:172] (0xc0021869a0) Reply frame received for 1 I0202 13:24:03.420513 8 log.go:172] (0xc0021869a0) (0xc002ee50e0) Create stream I0202 13:24:03.420532 8 log.go:172] (0xc0021869a0) (0xc002ee50e0) Stream added, broadcasting: 3 I0202 13:24:03.424149 8 log.go:172] (0xc0021869a0) Reply frame received for 3 I0202 13:24:03.424252 8 log.go:172] (0xc0021869a0) (0xc00120e960) Create stream I0202 13:24:03.424270 8 log.go:172] (0xc0021869a0) (0xc00120e960) Stream added, broadcasting: 5 I0202 13:24:03.428017 8 log.go:172] (0xc0021869a0) Reply frame received for 5 I0202 13:24:03.664862 8 log.go:172] (0xc0021869a0) Data frame received for 3 I0202 13:24:03.664965 8 log.go:172] (0xc002ee50e0) (3) Data frame handling I0202 13:24:03.665005 8 log.go:172] (0xc002ee50e0) (3) Data frame sent I0202 13:24:03.870057 8 log.go:172] (0xc0021869a0) (0xc002ee50e0) Stream removed, broadcasting: 3 I0202 13:24:03.870225 8 log.go:172] (0xc0021869a0) Data frame received for 1 I0202 13:24:03.870247 8 log.go:172] (0xc0020b9040) (1) Data frame handling I0202 13:24:03.870299 8 log.go:172] (0xc0020b9040) (1) Data frame sent I0202 13:24:03.870309 8 log.go:172] (0xc0021869a0) (0xc0020b9040) Stream removed, broadcasting: 1 I0202 13:24:03.870386 8 log.go:172] (0xc0021869a0) (0xc00120e960) Stream removed, broadcasting: 5 I0202 
13:24:03.870669 8 log.go:172] (0xc0021869a0) Go away received I0202 13:24:03.870790 8 log.go:172] (0xc0021869a0) (0xc0020b9040) Stream removed, broadcasting: 1 I0202 13:24:03.870816 8 log.go:172] (0xc0021869a0) (0xc002ee50e0) Stream removed, broadcasting: 3 I0202 13:24:03.870830 8 log.go:172] (0xc0021869a0) (0xc00120e960) Stream removed, broadcasting: 5 Feb 2 13:24:03.870: INFO: Found all expected endpoints: [netserver-0] Feb 2 13:24:04.195: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8042 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 2 13:24:04.195: INFO: >>> kubeConfig: /root/.kube/config I0202 13:24:04.258870 8 log.go:172] (0xc0021871e0) (0xc0020b9360) Create stream I0202 13:24:04.258959 8 log.go:172] (0xc0021871e0) (0xc0020b9360) Stream added, broadcasting: 1 I0202 13:24:04.265837 8 log.go:172] (0xc0021871e0) Reply frame received for 1 I0202 13:24:04.265947 8 log.go:172] (0xc0021871e0) (0xc002ee5180) Create stream I0202 13:24:04.265957 8 log.go:172] (0xc0021871e0) (0xc002ee5180) Stream added, broadcasting: 3 I0202 13:24:04.268329 8 log.go:172] (0xc0021871e0) Reply frame received for 3 I0202 13:24:04.268407 8 log.go:172] (0xc0021871e0) (0xc0020b9400) Create stream I0202 13:24:04.268427 8 log.go:172] (0xc0021871e0) (0xc0020b9400) Stream added, broadcasting: 5 I0202 13:24:04.270794 8 log.go:172] (0xc0021871e0) Reply frame received for 5 I0202 13:24:04.400784 8 log.go:172] (0xc0021871e0) Data frame received for 3 I0202 13:24:04.400870 8 log.go:172] (0xc002ee5180) (3) Data frame handling I0202 13:24:04.400890 8 log.go:172] (0xc002ee5180) (3) Data frame sent I0202 13:24:04.666232 8 log.go:172] (0xc0021871e0) (0xc002ee5180) Stream removed, broadcasting: 3 I0202 13:24:04.666789 8 log.go:172] (0xc0021871e0) Data frame received for 1 I0202 13:24:04.666949 8 log.go:172] (0xc0021871e0) (0xc0020b9400) Stream removed, broadcasting: 5 I0202 13:24:04.667394 8 log.go:172] (0xc0020b9360) (1) Data frame handling I0202 13:24:04.667614 8 log.go:172] (0xc0020b9360) (1) Data frame sent I0202 13:24:04.667707 8 log.go:172] (0xc0021871e0) (0xc0020b9360) Stream removed, broadcasting: 1 I0202 13:24:04.667800 8 log.go:172] (0xc0021871e0) Go away received I0202 13:24:04.668056 8 log.go:172] (0xc0021871e0) (0xc0020b9360) Stream removed, broadcasting: 1 I0202 13:24:04.668140 8 log.go:172] (0xc0021871e0) (0xc002ee5180) Stream removed, broadcasting: 3 I0202 13:24:04.668212 8 log.go:172] (0xc0021871e0) (0xc0020b9400) Stream removed, broadcasting: 5 Feb 2 13:24:04.668: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:24:04.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8042" for this suite. 
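The node-pod HTTP check above reduces to one operation: exec into a host-network helper pod and curl each netserver pod's /hostName endpoint, passing once every expected hostname has been seen. The equivalent manual probe, using the (now-deleted) namespace, pod name, and pod IP from this run as placeholders:

  kubectl -n pod-network-test-8042 exec host-test-container-pod -c hostexec -- \
    sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.5:8080/hostName"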
Feb 2 13:24:26.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:24:26.848: INFO: namespace pod-network-test-8042 deletion completed in 22.166368762s • [SLOW TEST:63.978 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:24:26.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 2 13:24:35.571: INFO: Successfully updated pod "pod-update-activedeadlineseconds-555b8bca-2c2c-4fca-ae39-76dce6006769" Feb 2 13:24:35.571: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-555b8bca-2c2c-4fca-ae39-76dce6006769" in namespace "pods-1507" to be "terminated due to deadline exceeded" Feb 2 13:24:35.583: INFO: Pod "pod-update-activedeadlineseconds-555b8bca-2c2c-4fca-ae39-76dce6006769": Phase="Running", Reason="", readiness=true. Elapsed: 12.240362ms Feb 2 13:24:37.591: INFO: Pod "pod-update-activedeadlineseconds-555b8bca-2c2c-4fca-ae39-76dce6006769": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020273005s Feb 2 13:24:37.592: INFO: Pod "pod-update-activedeadlineseconds-555b8bca-2c2c-4fca-ae39-76dce6006769" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:24:37.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1507" for this suite. 
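activeDeadlineSeconds is one of the few pod-spec fields that may be added or shortened on a live pod; once the deadline elapses the kubelet terminates the pod and the API reports Phase=Failed with Reason=DeadlineExceeded, exactly the transition logged above. A minimal sketch with hypothetical names:

  kubectl run deadline-demo --image=busybox --restart=Never -- sleep 3600
  kubectl patch pod deadline-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
  sleep 10
  kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'   # Failed/DeadlineExceeded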
Feb 2 13:24:43.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:24:43.757: INFO: namespace pods-1507 deletion completed in 6.159600334s • [SLOW TEST:16.908 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:24:43.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:24:44.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1250" for this suite. Feb 2 13:24:50.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:24:50.727: INFO: namespace services-1250 deletion completed in 6.195973635s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.969 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:24:50.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-2e26f5db-da0a-4a8f-aeef-95f406f888cb STEP: Creating a pod to test consume secrets Feb 2 13:24:50.818: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79" in namespace "projected-8126" to be "success or 
failure" Feb 2 13:24:50.827: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.799678ms Feb 2 13:24:52.837: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018915798s Feb 2 13:24:54.847: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028261422s Feb 2 13:24:56.869: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051034546s Feb 2 13:24:58.889: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071021272s STEP: Saw pod success Feb 2 13:24:58.890: INFO: Pod "pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79" satisfied condition "success or failure" Feb 2 13:24:58.899: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79 container projected-secret-volume-test: STEP: delete the pod Feb 2 13:24:59.022: INFO: Waiting for pod pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79 to disappear Feb 2 13:24:59.035: INFO: Pod pod-projected-secrets-c63a8bc1-4a52-4872-8e30-1b4b46f2bc79 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:24:59.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8126" for this suite. Feb 2 13:25:05.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:25:05.197: INFO: namespace projected-8126 deletion completed in 6.152957101s • [SLOW TEST:14.469 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:25:05.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-630.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-630.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-630.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-630.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 2 13:25:19.362: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-630.svc.cluster.local from pod dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974: the server could not find the requested resource (get pods dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974) Feb 2 13:25:19.376: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974: the server could not find the requested resource (get pods dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974) Feb 2 13:25:19.389: INFO: Unable to read jessie_udp@PodARecord from pod dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974: the server could not find the requested resource (get pods dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974) Feb 2 13:25:19.394: INFO: Unable to read jessie_tcp@PodARecord from pod dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974: the server could not find the requested resource (get pods dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974) Feb 2 13:25:19.394: INFO: Lookups using dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974 failed for: [jessie_hosts@dns-querier-1.dns-test-service.dns-630.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 2 13:25:24.469: INFO: DNS probes using dns-630/dns-test-5511b27c-e467-4f6a-8dc7-fc54313fe974 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:25:24.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-630" for this suite. 
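The /etc/hosts probe above loops getent hosts inside the wheezy and jessie containers, confirming that the kubelet-managed hosts file carries entries for the pod's own hostname and its service alias. The core of the check, sketched against a hypothetical pod (the e2e images ship getent; plain busybox builds may not):

  kubectl run hosts-demo --image=busybox --restart=Never -- sleep 3600
  kubectl exec hosts-demo -- cat /etc/hosts                      # kubelet-managed file, includes the pod's own entry
  kubectl exec hosts-demo -- sh -c 'getent hosts "$(hostname)"'  # non-empty output is what the probe writes OK for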
Feb 2 13:25:30.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:25:30.719: INFO: namespace dns-630 deletion completed in 6.196132888s • [SLOW TEST:25.522 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:25:30.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0202 13:26:13.572672 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 2 13:26:13.572: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:26:13.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7838" for this suite. 
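Orphan-on-delete, as exercised above, is plain kubectl: deleting a replication controller with cascading disabled removes the RC but leaves its pods running with no owner. A sketch, names illustrative; kubectl of this vintage spells the flag --cascade=false, newer releases use --cascade=orphan:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: orphan-demo
  spec:
    replicas: 2
    selector: {app: orphan-demo}
    template:
      metadata:
        labels: {app: orphan-demo}
      spec:
        containers:
        - {name: pause, image: k8s.gcr.io/pause:3.1}
  EOF
  kubectl delete rc orphan-demo --cascade=false
  kubectl get pods -l app=orphan-demo   # still running, now ownerless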
Feb 2 13:26:33.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:26:33.733: INFO: namespace gc-7838 deletion completed in 20.154448014s • [SLOW TEST:63.013 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:26:33.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-84d464c2-4694-4017-845a-bfc3bf046e7a STEP: Creating a pod to test consume configMaps Feb 2 13:26:33.947: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180" in namespace "configmap-8352" to be "success or failure" Feb 2 13:26:34.060: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180": Phase="Pending", Reason="", readiness=false. Elapsed: 112.277429ms Feb 2 13:26:36.075: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127838428s Feb 2 13:26:38.083: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135597752s Feb 2 13:26:40.102: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155251211s Feb 2 13:26:42.126: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.178464697s STEP: Saw pod success Feb 2 13:26:42.126: INFO: Pod "pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180" satisfied condition "success or failure" Feb 2 13:26:42.134: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180 container configmap-volume-test: STEP: delete the pod Feb 2 13:26:42.244: INFO: Waiting for pod pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180 to disappear Feb 2 13:26:42.264: INFO: Pod pod-configmaps-bd45f44b-7251-4b6a-af4d-65891e08b180 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:26:42.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8352" for this suite. 
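"Consumable with mappings" means the ConfigMap volume uses items: to project a key onto a caller-chosen path instead of the default file named after the key. A minimal sketch, hypothetical names throughout:

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mapping-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["cat", "/etc/cm/path/to/data-2"]   # prints value-1
      volumeMounts:
      - {name: cm, mountPath: /etc/cm}
    volumes:
    - name: cm
      configMap:
        name: demo-cm
        items:
        - {key: data-1, path: path/to/data-2}
  EOF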
Feb 2 13:26:48.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:26:48.542: INFO: namespace configmap-8352 deletion completed in 6.205181655s • [SLOW TEST:14.808 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:26:48.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 2 13:26:48.715: INFO: Waiting up to 5m0s for pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58" in namespace "emptydir-967" to be "success or failure" Feb 2 13:26:48.729: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58": Phase="Pending", Reason="", readiness=false. Elapsed: 13.859339ms Feb 2 13:26:50.743: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028228811s Feb 2 13:26:52.751: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03632414s Feb 2 13:26:54.764: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048545549s Feb 2 13:26:56.781: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066479625s STEP: Saw pod success Feb 2 13:26:56.782: INFO: Pod "pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58" satisfied condition "success or failure" Feb 2 13:26:56.786: INFO: Trying to get logs from node iruya-node pod pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58 container test-container: STEP: delete the pod Feb 2 13:26:56.984: INFO: Waiting for pod pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58 to disappear Feb 2 13:26:56.992: INFO: Pod pod-6bc6d22b-0a53-48a2-afbd-b57c2fb49e58 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:26:56.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-967" for this suite. 
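The "(root,0644,tmpfs)" triple names the matrix point being tested: write as root, expect file mode 0644, on a memory-backed emptyDir (medium: Memory is mounted as tmpfs). A sketch of the same check, hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
      volumeMounts:
      - {name: scratch, mountPath: /mnt}
    volumes:
    - name: scratch
      emptyDir: {medium: Memory}
  EOF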
Feb 2 13:27:03.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:27:03.210: INFO: namespace emptydir-967 deletion completed in 6.208020933s • [SLOW TEST:14.666 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:27:03.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 2 13:27:25.406: INFO: Container started at 2020-02-02 13:27:09 +0000 UTC, pod became ready at 2020-02-02 13:27:24 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:27:25.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8880" for this suite. 
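The readiness assertion above (started at 13:27:09, ready at 13:27:24) is the probe's initialDelaySeconds at work: the kubelet does not run the readiness probe until the delay expires, and a readiness failure never restarts a container (only liveness failures do). A pod shaped like the one under test, values hypothetical:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: test
      image: busybox
      command: ["sleep", "600"]
      readinessProbe:
        exec:
          command: ["true"]
        initialDelaySeconds: 15
        periodSeconds: 5
  EOF
  kubectl get pod readiness-demo -w   # READY flips to 1/1 only after ~15s; RESTARTS stays 0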
Feb 2 13:27:47.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:27:47.675: INFO: namespace container-probe-8880 deletion completed in 22.261123671s • [SLOW TEST:44.465 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:27:47.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-5ed426f8-8072-4d32-b9cd-aede3efc98d0 STEP: Creating a pod to test consume secrets Feb 2 13:27:47.831: INFO: Waiting up to 5m0s for pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5" in namespace "secrets-2776" to be "success or failure" Feb 2 13:27:47.838: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.400558ms Feb 2 13:27:50.073: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242554978s Feb 2 13:27:52.112: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280808733s Feb 2 13:27:54.138: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30722883s Feb 2 13:27:56.147: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316424534s Feb 2 13:27:58.155: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324455742s STEP: Saw pod success Feb 2 13:27:58.155: INFO: Pod "pod-secrets-6f419f13-9278-4069-938a-e846049476a5" satisfied condition "success or failure" Feb 2 13:27:58.164: INFO: Trying to get logs from node iruya-node pod pod-secrets-6f419f13-9278-4069-938a-e846049476a5 container secret-volume-test: STEP: delete the pod Feb 2 13:27:58.376: INFO: Waiting for pod pod-secrets-6f419f13-9278-4069-938a-e846049476a5 to disappear Feb 2 13:27:58.385: INFO: Pod pod-secrets-6f419f13-9278-4069-938a-e846049476a5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:27:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2776" for this suite. 
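This Secrets variant runs the pod as a non-root UID and leans on two knobs: defaultMode sets the permission bits on the projected files, and fsGroup re-owns the volume's group so the non-root user can still read them. A sketch with hypothetical names and values:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000
      fsGroup: 2000
    containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/data-1"]
      volumeMounts:
      - {name: sec, mountPath: /etc/secret}
    volumes:
    - name: sec
      secret:
        secretName: demo-secret
        defaultMode: 0440
  EOF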
Feb 2 13:28:04.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:28:04.555: INFO: namespace secrets-2776 deletion completed in 6.161694849s • [SLOW TEST:16.880 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:28:04.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 2 13:28:04.650: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b" in namespace "downward-api-2565" to be "success or failure" Feb 2 13:28:04.710: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 59.714662ms Feb 2 13:28:06.717: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067038058s Feb 2 13:28:08.728: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078089457s Feb 2 13:28:10.738: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087479989s Feb 2 13:28:12.747: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.096847756s STEP: Saw pod success Feb 2 13:28:12.747: INFO: Pod "downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b" satisfied condition "success or failure" Feb 2 13:28:12.751: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b container client-container: STEP: delete the pod Feb 2 13:28:12.859: INFO: Waiting for pod downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b to disappear Feb 2 13:28:12.870: INFO: Pod downwardapi-volume-d63c1499-0133-47b9-beb0-6fc9a5afcb6b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:28:12.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2565" for this suite. 
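The downward API volume lets a container read its own resource requests from a file; resourceFieldRef plus a divisor turns requests.memory into a plain number, which is what the test reads back. A minimal sketch, hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/mem_request"]   # prints 32 (the request divided by the 1Mi divisor)
      resources:
        requests: {memory: 32Mi}
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory
            divisor: 1Mi
  EOF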
Feb 2 13:28:18.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:28:19.034: INFO: namespace downward-api-2565 deletion completed in 6.155018149s • [SLOW TEST:14.479 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:28:19.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 2 13:28:19.102: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 2 13:28:19.168: INFO: Waiting for terminating namespaces to be deleted... Feb 2 13:28:19.171: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 2 13:28:19.181: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 2 13:28:19.181: INFO: Container weave ready: true, restart count 0 Feb 2 13:28:19.181: INFO: Container weave-npc ready: true, restart count 0 Feb 2 13:28:19.181: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.181: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 13:28:19.181: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 2 13:28:19.193: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container coredns ready: true, restart count 0 Feb 2 13:28:19.193: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container etcd ready: true, restart count 0 Feb 2 13:28:19.193: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 2 13:28:19.193: INFO: Container weave ready: true, restart count 0 Feb 2 13:28:19.193: INFO: Container weave-npc ready: true, restart count 0 Feb 2 13:28:19.193: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container kube-controller-manager ready: true, restart count 19 Feb 2 13:28:19.193: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container kube-proxy ready: true, restart count 0 Feb 2 13:28:19.193: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) 
Feb 2 13:28:19.193: INFO: Container kube-apiserver ready: true, restart count 0 Feb 2 13:28:19.193: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container kube-scheduler ready: true, restart count 13 Feb 2 13:28:19.193: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 2 13:28:19.193: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-42eeaa28-7478-497c-b1d7-05c8c63710c9 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-42eeaa28-7478-497c-b1d7-05c8c63710c9 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-42eeaa28-7478-497c-b1d7-05c8c63710c9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:28:39.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1485" for this suite. Feb 2 13:28:53.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:28:53.602: INFO: namespace sched-pred-1485 deletion completed in 14.161873891s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:34.568 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:28:53.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 2 13:28:53.736: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:29:07.528: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "init-container-3459" for this suite. Feb 2 13:29:13.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:29:13.846: INFO: namespace init-container-3459 deletion completed in 6.266500704s • [SLOW TEST:20.242 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:29:13.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-ca74fa93-e6e2-4861-8f2b-39916b419d74 STEP: Creating a pod to test consume secrets Feb 2 13:29:13.993: INFO: Waiting up to 5m0s for pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263" in namespace "secrets-2719" to be "success or failure" Feb 2 13:29:14.001: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Pending", Reason="", readiness=false. Elapsed: 6.902732ms Feb 2 13:29:16.010: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016003087s Feb 2 13:29:18.024: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030331384s Feb 2 13:29:20.033: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039448074s Feb 2 13:29:22.046: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052705752s Feb 2 13:29:24.054: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060101415s STEP: Saw pod success Feb 2 13:29:24.054: INFO: Pod "pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263" satisfied condition "success or failure" Feb 2 13:29:24.059: INFO: Trying to get logs from node iruya-node pod pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263 container secret-volume-test: STEP: delete the pod Feb 2 13:29:24.200: INFO: Waiting for pod pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263 to disappear Feb 2 13:29:24.214: INFO: Pod pod-secrets-a93ac877-f2e4-4e82-8f92-3a839b6ee263 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:29:24.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2719" for this suite. 
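Looking back at the RestartNever init-container spec a little above: init containers run sequentially to completion before any app container starts, and under restartPolicy: Never a failed init container fails the pod outright rather than retrying. A sketch of the happy path, names illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    - {name: init1, image: busybox, command: ["true"]}
    - {name: init2, image: busybox, command: ["true"]}
    containers:
    - {name: run1, image: busybox, command: ["sh", "-c", "echo started; sleep 5"]}
  EOF
  kubectl get pod init-demo -w   # Init:0/2 -> Init:1/2 -> PodInitializing -> Running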
Feb 2 13:29:30.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 2 13:29:30.403: INFO: namespace secrets-2719 deletion completed in 6.17722709s • [SLOW TEST:16.556 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 2 13:29:30.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 2 13:29:30.506: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e" in namespace "projected-1766" to be "success or failure" Feb 2 13:29:30.511: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.618549ms Feb 2 13:29:32.528: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021836365s Feb 2 13:29:34.572: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065851283s Feb 2 13:29:36.582: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075906112s Feb 2 13:29:38.596: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090405579s STEP: Saw pod success Feb 2 13:29:38.596: INFO: Pod "downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e" satisfied condition "success or failure" Feb 2 13:29:38.600: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e container client-container: STEP: delete the pod Feb 2 13:29:38.735: INFO: Waiting for pod downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e to disappear Feb 2 13:29:38.768: INFO: Pod downwardapi-volume-b48699a4-85f7-4ccf-99f8-57bd534c6e1e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 2 13:29:38.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1766" for this suite. 
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:29:45.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:29:45.137: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 20.590981ms)
Feb  2 13:29:45.144: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.558649ms)
Feb  2 13:29:45.150: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.280732ms)
Feb  2 13:29:45.156: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.544305ms)
Feb  2 13:29:45.162: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.443971ms)
Feb  2 13:29:45.170: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.330758ms)
Feb  2 13:29:45.198: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.87638ms)
Feb  2 13:29:45.203: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.480781ms)
Feb  2 13:29:45.210: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.218547ms)
Feb  2 13:29:45.216: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.418017ms)
Feb  2 13:29:45.220: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.525488ms)
Feb  2 13:29:45.225: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.515611ms)
Feb  2 13:29:45.230: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.811915ms)
Feb  2 13:29:45.234: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.985737ms)
Feb  2 13:29:45.238: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.603642ms)
Feb  2 13:29:45.242: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.808121ms)
Feb  2 13:29:45.251: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.653058ms)
Feb  2 13:29:45.259: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.405763ms)
Feb  2 13:29:45.266: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.871452ms)
Feb  2 13:29:45.272: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.680702ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:29:45.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3604" for this suite.
Feb  2 13:29:51.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:29:51.452: INFO: namespace proxy-3604 deletion completed in 6.175117344s

• [SLOW TEST:6.452 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
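
The twenty numbered requests above all hit the kubelet's /logs endpoint through the apiserver's node proxy subresource with an explicit port. The same call can be made by hand with kubectl, which sends a raw GET through the configured apiserver:

# Fetch the kubelet log directory listing via the proxy subresource
# (node name and kubelet port 10250 taken from this run).
kubectl get --raw "/api/v1/nodes/iruya-node:10250/proxy/logs/"
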
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:29:51.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  2 13:29:51.572: INFO: Waiting up to 5m0s for pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0" in namespace "downward-api-1373" to be "success or failure"
Feb  2 13:29:51.608: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0": Phase="Pending", Reason="", readiness=false. Elapsed: 35.784716ms
Feb  2 13:29:53.620: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047544896s
Feb  2 13:29:55.628: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055714221s
Feb  2 13:29:57.636: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06339658s
Feb  2 13:29:59.643: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070501805s
STEP: Saw pod success
Feb  2 13:29:59.643: INFO: Pod "downward-api-3a0a7542-6421-49a0-aa34-774469279be0" satisfied condition "success or failure"
Feb  2 13:29:59.646: INFO: Trying to get logs from node iruya-node pod downward-api-3a0a7542-6421-49a0-aa34-774469279be0 container dapi-container: 
STEP: delete the pod
Feb  2 13:29:59.999: INFO: Waiting for pod downward-api-3a0a7542-6421-49a0-aa34-774469279be0 to disappear
Feb  2 13:30:00.010: INFO: Pod downward-api-3a0a7542-6421-49a0-aa34-774469279be0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:30:00.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1373" for this suite.
Feb  2 13:30:06.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:30:06.228: INFO: namespace downward-api-1373 deletion completed in 6.20356789s

• [SLOW TEST:14.776 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
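
The Downward API test above relies on the documented fallback: when a container declares no limits, resourceFieldRef env vars resolve limits.cpu and limits.memory to the node's allocatable values. A minimal sketch, with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu_limit=$CPU_LIMIT memory_limit=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-defaults-demo   # no limits are set, so both values default to node allocatable
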
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:30:06.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  2 13:30:06.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-105'
Feb  2 13:30:08.337: INFO: stderr: ""
Feb  2 13:30:08.337: INFO: stdout: "pod/pause created\n"
Feb  2 13:30:08.337: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  2 13:30:08.338: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-105" to be "running and ready"
Feb  2 13:30:08.656: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 317.867637ms
Feb  2 13:30:10.664: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325964914s
Feb  2 13:30:12.693: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35528576s
Feb  2 13:30:14.708: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.37007715s
Feb  2 13:30:16.716: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.378544526s
Feb  2 13:30:16.717: INFO: Pod "pause" satisfied condition "running and ready"
Feb  2 13:30:16.717: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  2 13:30:16.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-105'
Feb  2 13:30:16.892: INFO: stderr: ""
Feb  2 13:30:16.892: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  2 13:30:16.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-105'
Feb  2 13:30:17.046: INFO: stderr: ""
Feb  2 13:30:17.046: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  2 13:30:17.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-105'
Feb  2 13:30:17.195: INFO: stderr: ""
Feb  2 13:30:17.195: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  2 13:30:17.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-105'
Feb  2 13:30:17.309: INFO: stderr: ""
Feb  2 13:30:17.309: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  2 13:30:17.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-105'
Feb  2 13:30:17.492: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:17.492: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  2 13:30:17.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-105'
Feb  2 13:30:17.635: INFO: stderr: "No resources found.\n"
Feb  2 13:30:17.635: INFO: stdout: ""
Feb  2 13:30:17.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-105 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 13:30:17.730: INFO: stderr: ""
Feb  2 13:30:17.730: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:30:17.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-105" for this suite.
Feb  2 13:30:23.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:30:23.937: INFO: namespace kubectl-105 deletion completed in 6.202877525s

• [SLOW TEST:17.709 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
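
Condensed, the label round-trip the test performs is three kubectl invocations; the trailing dash is kubectl's label-removal syntax (pod and namespace names taken from this run):

kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-105
kubectl get pod pause -L testing-label --namespace=kubectl-105   # TESTING-LABEL column shows the value
kubectl label pods pause testing-label- --namespace=kubectl-105  # trailing '-' deletes the label
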
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:30:23.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  2 13:30:24.020: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  2 13:30:24.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:24.347: INFO: stderr: ""
Feb  2 13:30:24.348: INFO: stdout: "service/redis-slave created\n"
Feb  2 13:30:24.348: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  2 13:30:24.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:24.875: INFO: stderr: ""
Feb  2 13:30:24.875: INFO: stdout: "service/redis-master created\n"
Feb  2 13:30:24.876: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  2 13:30:24.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:25.593: INFO: stderr: ""
Feb  2 13:30:25.593: INFO: stdout: "service/frontend created\n"
Feb  2 13:30:25.594: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  2 13:30:25.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:25.938: INFO: stderr: ""
Feb  2 13:30:25.938: INFO: stdout: "deployment.apps/frontend created\n"
Feb  2 13:30:25.939: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  2 13:30:25.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:26.636: INFO: stderr: ""
Feb  2 13:30:26.636: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  2 13:30:26.637: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  2 13:30:26.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1933'
Feb  2 13:30:27.995: INFO: stderr: ""
Feb  2 13:30:27.995: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  2 13:30:27.996: INFO: Waiting for all frontend pods to be Running.
Feb  2 13:30:53.049: INFO: Waiting for frontend to serve content.
Feb  2 13:30:53.129: INFO: Trying to add a new entry to the guestbook.
Feb  2 13:30:53.216: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  2 13:30:53.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:53.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:53.570: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 13:30:53.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:53.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:53.966: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 13:30:53.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:54.294: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:54.294: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 13:30:54.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:54.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:54.402: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 13:30:54.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:54.516: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:54.517: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  2 13:30:54.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1933'
Feb  2 13:30:54.592: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 13:30:54.592: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:30:54.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1933" for this suite.
Feb  2 13:31:42.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:31:42.781: INFO: namespace kubectl-1933 deletion completed in 48.146648091s

• [SLOW TEST:78.844 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
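
The guestbook flow above can be reproduced outside the test framework. A sketch assuming the five manifests printed above have been saved under a local guestbook/ directory (the path and the use of rollout status are assumptions, not part of this run):

# Create all guestbook components, wait for the frontend rollout,
# then force-delete the same way the test cleans up.
kubectl create -f guestbook/ --namespace=kubectl-1933
kubectl rollout status deployment/frontend --namespace=kubectl-1933
kubectl delete -f guestbook/ --grace-period=0 --force --namespace=kubectl-1933
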
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:31:42.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  2 13:31:43.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3126'
Feb  2 13:31:43.545: INFO: stderr: ""
Feb  2 13:31:43.545: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 13:31:43.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Feb  2 13:31:43.913: INFO: stderr: ""
Feb  2 13:31:43.913: INFO: stdout: "update-demo-nautilus-77lrl update-demo-nautilus-w7vpg "
Feb  2 13:31:43.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77lrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:44.144: INFO: stderr: ""
Feb  2 13:31:44.144: INFO: stdout: ""
Feb  2 13:31:44.144: INFO: update-demo-nautilus-77lrl is created but not running
Feb  2 13:31:49.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Feb  2 13:31:50.133: INFO: stderr: ""
Feb  2 13:31:50.133: INFO: stdout: "update-demo-nautilus-77lrl update-demo-nautilus-w7vpg "
Feb  2 13:31:50.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77lrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:50.585: INFO: stderr: ""
Feb  2 13:31:50.585: INFO: stdout: ""
Feb  2 13:31:50.585: INFO: update-demo-nautilus-77lrl is created but not running
Feb  2 13:31:55.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Feb  2 13:31:55.747: INFO: stderr: ""
Feb  2 13:31:55.747: INFO: stdout: "update-demo-nautilus-77lrl update-demo-nautilus-w7vpg "
Feb  2 13:31:55.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77lrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:55.973: INFO: stderr: ""
Feb  2 13:31:55.974: INFO: stdout: "true"
Feb  2 13:31:55.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77lrl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:56.072: INFO: stderr: ""
Feb  2 13:31:56.072: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:31:56.072: INFO: validating pod update-demo-nautilus-77lrl
Feb  2 13:31:56.078: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:31:56.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:31:56.079: INFO: update-demo-nautilus-77lrl is verified up and running
Feb  2 13:31:56.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7vpg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:56.229: INFO: stderr: ""
Feb  2 13:31:56.229: INFO: stdout: "true"
Feb  2 13:31:56.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w7vpg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:31:56.335: INFO: stderr: ""
Feb  2 13:31:56.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 13:31:56.335: INFO: validating pod update-demo-nautilus-w7vpg
Feb  2 13:31:56.347: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 13:31:56.347: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 13:31:56.347: INFO: update-demo-nautilus-w7vpg is verified up and running
STEP: rolling-update to new replication controller
Feb  2 13:31:56.350: INFO: scanned /root for discovery docs: 
Feb  2 13:31:56.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3126'
Feb  2 13:32:25.861: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  2 13:32:25.861: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 13:32:25.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Feb  2 13:32:26.088: INFO: stderr: ""
Feb  2 13:32:26.088: INFO: stdout: "update-demo-kitten-nkvbm update-demo-kitten-pbv2q update-demo-nautilus-77lrl "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  2 13:32:31.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3126'
Feb  2 13:32:31.292: INFO: stderr: ""
Feb  2 13:32:31.292: INFO: stdout: "update-demo-kitten-nkvbm update-demo-kitten-pbv2q "
Feb  2 13:32:31.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nkvbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:32:31.377: INFO: stderr: ""
Feb  2 13:32:31.377: INFO: stdout: "true"
Feb  2 13:32:31.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nkvbm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:32:31.460: INFO: stderr: ""
Feb  2 13:32:31.460: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  2 13:32:31.460: INFO: validating pod update-demo-kitten-nkvbm
Feb  2 13:32:31.487: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  2 13:32:31.487: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  2 13:32:31.488: INFO: update-demo-kitten-nkvbm is verified up and running
Feb  2 13:32:31.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pbv2q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:32:31.593: INFO: stderr: ""
Feb  2 13:32:31.593: INFO: stdout: "true"
Feb  2 13:32:31.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-pbv2q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3126'
Feb  2 13:32:31.693: INFO: stderr: ""
Feb  2 13:32:31.693: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  2 13:32:31.693: INFO: validating pod update-demo-kitten-pbv2q
Feb  2 13:32:31.706: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  2 13:32:31.706: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  2 13:32:31.706: INFO: update-demo-kitten-pbv2q is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:32:31.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3126" for this suite.
Feb  2 13:32:55.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:32:55.894: INFO: namespace kubectl-3126 deletion completed in 24.181524253s

• [SLOW TEST:73.112 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
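
The stderr above notes that kubectl rolling-update is deprecated in favor of rollout. With a Deployment instead of a bare replication controller, the equivalent nautilus-to-kitten image swap would go through the rollout machinery (the Deployment and container name update-demo are illustrative):

# Modern equivalent of the deprecated rolling-update flow.
kubectl set image deployment/update-demo \
  update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo   # blocks until the new pods are ready
kubectl rollout undo deployment/update-demo     # optional rollback to the previous revision
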
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:32:55.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8fc68ded-4746-4105-9cee-407d65a450a6
STEP: Creating a pod to test consume configMaps
Feb  2 13:32:56.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6" in namespace "configmap-6140" to be "success or failure"
Feb  2 13:32:56.701: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.128664ms
Feb  2 13:32:58.710: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042076189s
Feb  2 13:33:00.738: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069732478s
Feb  2 13:33:02.750: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08140781s
Feb  2 13:33:04.762: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093436429s
STEP: Saw pod success
Feb  2 13:33:04.762: INFO: Pod "pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6" satisfied condition "success or failure"
Feb  2 13:33:04.766: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6 container configmap-volume-test: 
STEP: delete the pod
Feb  2 13:33:04.829: INFO: Waiting for pod pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6 to disappear
Feb  2 13:33:04.847: INFO: Pod pod-configmaps-c54ae02a-52a3-4f82-b911-324e1fc799a6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:33:04.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6140" for this suite.
Feb  2 13:33:10.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:33:11.022: INFO: namespace configmap-6140 deletion completed in 6.164373417s

• [SLOW TEST:15.128 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
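
The ConfigMap test mirrors the Secrets test earlier in this run: mount a configMap as a volume and read one key back as a file. A minimal sketch with illustrative names:

# Create a configMap and a pod that mounts it; names are illustrative.
kubectl create configmap test-configmap --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-consumer
spec:
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: test-configmap
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
EOF
kubectl logs configmap-consumer   # expected output: value-1
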
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:33:11.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:33:11.072: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  2 13:33:11.104: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  2 13:33:16.122: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  2 13:33:20.149: INFO: Creating deployment "test-rolling-update-deployment"
Feb  2 13:33:20.164: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  2 13:33:20.261: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  2 13:33:22.275: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  2 13:33:22.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 13:33:24.289: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 13:33:26.288: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716247200, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 13:33:28.293: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  2 13:33:28.315: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4494,SelfLink:/apis/apps/v1/namespaces/deployment-4494/deployments/test-rolling-update-deployment,UID:d5cb820c-7c22-4d41-97e1-e10d2827941b,ResourceVersion:22818134,Generation:1,CreationTimestamp:2020-02-02 13:33:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-02 13:33:20 +0000 UTC 2020-02-02 13:33:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-02 13:33:27 +0000 UTC 2020-02-02 13:33:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  2 13:33:28.321: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4494,SelfLink:/apis/apps/v1/namespaces/deployment-4494/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:2360cfe4-90f8-44ab-b8aa-9cba392e971c,ResourceVersion:22818124,Generation:1,CreationTimestamp:2020-02-02 13:33:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d5cb820c-7c22-4d41-97e1-e10d2827941b 0xc00297e8e7 0xc00297e8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  2 13:33:28.321: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  2 13:33:28.322: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4494,SelfLink:/apis/apps/v1/namespaces/deployment-4494/replicasets/test-rolling-update-controller,UID:5fbc9ce8-f03d-4a38-ba7c-be0a99e000c5,ResourceVersion:22818133,Generation:2,CreationTimestamp:2020-02-02 13:33:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment d5cb820c-7c22-4d41-97e1-e10d2827941b 0xc00297e7ff 0xc00297e810}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 13:33:28.328: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-csqsk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-csqsk,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4494,SelfLink:/api/v1/namespaces/deployment-4494/pods/test-rolling-update-deployment-79f6b9d75c-csqsk,UID:052a4387-471a-41b3-8078-693d8f1da25e,ResourceVersion:22818123,Generation:0,CreationTimestamp:2020-02-02 13:33:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 2360cfe4-90f8-44ab-b8aa-9cba392e971c 0xc00297f557 0xc00297f558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-m7pts {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-m7pts,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-m7pts true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00297f5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00297f5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:33:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:33:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:33:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:33:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-02 13:33:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-02 13:33:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1c11979c70d4bb1417a45105c91c9600c551d3e1cd2fc7f57a5b53c2b44cfcc6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:33:28.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4494" for this suite.
Feb  2 13:33:34.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:33:34.506: INFO: namespace deployment-4494 deletion completed in 6.169168661s

• [SLOW TEST:23.482 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
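
The Deployment dump above shows the strategy that drove this test: RollingUpdate with maxUnavailable and maxSurge of 25%, the API defaults. Written out explicitly in a manifest, with illustrative names but the image from this run:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # defaults, matching the dump above
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# After an update, the old ReplicaSet is scaled to 0 but retained
# (revisionHistoryLimit, default 10), as seen in the test's adoption check.
kubectl get rs -l name=sample-pod
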
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:33:34.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  2 13:33:34.650: INFO: Waiting up to 5m0s for pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0" in namespace "downward-api-3821" to be "success or failure"
Feb  2 13:33:34.699: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.977312ms
Feb  2 13:33:36.716: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066175214s
Feb  2 13:33:38.731: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080891228s
Feb  2 13:33:40.739: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088933758s
Feb  2 13:33:42.745: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095225154s
Feb  2 13:33:44.757: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106668108s
STEP: Saw pod success
Feb  2 13:33:44.757: INFO: Pod "downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0" satisfied condition "success or failure"
Feb  2 13:33:44.777: INFO: Trying to get logs from node iruya-node pod downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0 container dapi-container: 
STEP: delete the pod
Feb  2 13:33:44.884: INFO: Waiting for pod downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0 to disappear
Feb  2 13:33:44.889: INFO: Pod downward-api-c5c5bedd-12cc-4c67-bad7-495de7137bc0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:33:44.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3821" for this suite.
Feb  2 13:33:50.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:33:51.052: INFO: namespace downward-api-3821 deletion completed in 6.159484371s

• [SLOW TEST:16.546 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
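The downward-api pod that just succeeded is essentially this shape (sketch; the container name comes from the log, the image, env names, and command are illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIEnvPod injects the pod's own name, namespace and IP as
    // env vars via fieldRef, then exits so the e2e "success or failure"
    // wait above sees Phase=Succeeded.
    func downwardAPIEnvPod() *corev1.Pod {
    	fieldEnv := func(name, path string) corev1.EnvVar {
    		return corev1.EnvVar{Name: name, ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
    		}}
    	}
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "dapi-container",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{
    					fieldEnv("POD_NAME", "metadata.name"),
    					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
    					fieldEnv("POD_IP", "status.podIP"),
    				},
    			}},
    		},
    	}
    }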
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:33:51.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 13:33:51.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45" in namespace "projected-6017" to be "success or failure"
Feb  2 13:33:51.142: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.956674ms
Feb  2 13:33:53.152: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017551182s
Feb  2 13:33:55.161: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026702621s
Feb  2 13:33:57.172: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037717864s
Feb  2 13:33:59.185: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05135632s
STEP: Saw pod success
Feb  2 13:33:59.186: INFO: Pod "downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45" satisfied condition "success or failure"
Feb  2 13:33:59.200: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45 container client-container: 
STEP: delete the pod
Feb  2 13:33:59.288: INFO: Waiting for pod downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45 to disappear
Feb  2 13:33:59.296: INFO: Pod downwardapi-volume-11dc640b-e0cd-42b6-a69c-8869d47cda45 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:33:59.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6017" for this suite.
Feb  2 13:34:05.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:34:05.516: INFO: namespace projected-6017 deletion completed in 6.212110395s

• [SLOW TEST:14.463 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
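The projected downwardAPI case passes because resourceFieldRef falls back to node allocatable when the container sets no memory limit. A sketch of the volume wiring (names and paths illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedMemoryLimitPod mounts a projected downwardAPI file exposing
    // limits.memory. The container deliberately sets no memory limit, so
    // the kubelet writes the node's allocatable memory instead, which is
    // the behavior the spec above verifies.
    func projectedMemoryLimitPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							DownwardAPI: &corev1.DownwardAPIProjection{
    								Items: []corev1.DownwardAPIVolumeFile{{
    									Path: "memory_limit",
    									ResourceFieldRef: &corev1.ResourceFieldSelector{
    										ContainerName: "client-container",
    										Resource:      "limits.memory",
    									},
    								}},
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "client-container", // no resources.limits.memory on purpose
    				Image:        "busybox",
    				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    		},
    	}
    }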
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:34:05.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-5b891192-9c60-48a9-8c5e-f672328b67fb in namespace container-probe-2608
Feb  2 13:34:13.618: INFO: Started pod liveness-5b891192-9c60-48a9-8c5e-f672328b67fb in namespace container-probe-2608
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 13:34:13.622: INFO: Initial restart count of pod liveness-5b891192-9c60-48a9-8c5e-f672328b67fb is 0
Feb  2 13:34:31.717: INFO: Restart count of pod container-probe-2608/liveness-5b891192-9c60-48a9-8c5e-f672328b67fb is now 1 (18.094783928s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:34:31.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2608" for this suite.
Feb  2 13:34:37.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:34:38.114: INFO: namespace container-probe-2608 deletion completed in 6.332811822s

• [SLOW TEST:32.598 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
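The single restart observed above (restartCount 0 -> 1 after ~18s) is the liveness machinery working as intended. A pod of roughly this shape reproduces it (sketch; v1.15-era API, where the probe field is still corev1.Handler; image and timings are assumptions):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // httpLivenessPod runs a server whose /healthz starts failing after a
    // while; once FailureThreshold probes fail, the kubelet kills and
    // restarts the container, bumping restartCount as logged above.
    func httpLivenessPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "liveness",
    				Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // assumed image
    				LivenessProbe: &corev1.Probe{
    					Handler: corev1.Handler{
    						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
    					},
    					InitialDelaySeconds: 15,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    }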
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:34:38.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  2 13:34:38.252: INFO: Waiting up to 5m0s for pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a" in namespace "emptydir-4871" to be "success or failure"
Feb  2 13:34:38.273: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.485336ms
Feb  2 13:34:40.281: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028208909s
Feb  2 13:34:42.307: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054514042s
Feb  2 13:34:44.324: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071926202s
Feb  2 13:34:46.382: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129327796s
STEP: Saw pod success
Feb  2 13:34:46.382: INFO: Pod "pod-5ce64f0a-a7f4-430a-b554-e79a916b766a" satisfied condition "success or failure"
Feb  2 13:34:46.396: INFO: Trying to get logs from node iruya-node pod pod-5ce64f0a-a7f4-430a-b554-e79a916b766a container test-container: 
STEP: delete the pod
Feb  2 13:34:46.571: INFO: Waiting for pod pod-5ce64f0a-a7f4-430a-b554-e79a916b766a to disappear
Feb  2 13:34:46.607: INFO: Pod pod-5ce64f0a-a7f4-430a-b554-e79a916b766a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:34:46.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4871" for this suite.
Feb  2 13:34:52.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:34:52.770: INFO: namespace emptydir-4871 deletion completed in 6.156051458s

• [SLOW TEST:14.656 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
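The tuple (non-root,0644,tmpfs) decodes as: run as a non-root UID, expect mode 0644 on the written file, and back the emptyDir with memory. Roughly (sketch; UID, image, and paths are illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirTmpfsPod writes a 0644 file into a memory-backed emptyDir
    // while running as a non-root user, mirroring the (non-root,0644,tmpfs)
    // combination exercised above.
    func emptyDirTmpfsPod() *corev1.Pod {
    	uid := int64(1001) // any non-root UID
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy:   corev1.RestartPolicyNever,
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
    			Volumes: []corev1.Volume{{
    				Name: "test-volume",
    				VolumeSource: corev1.VolumeSource{
    					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:  "test-container",
    				Image: "busybox",
    				Command: []string{"sh", "-c",
    					"echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && stat -c %a /mnt/test/f"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
    			}},
    		},
    	}
    }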
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:34:52.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:34:52.985: INFO: Create a RollingUpdate DaemonSet
Feb  2 13:34:52.993: INFO: Check that daemon pods launch on every node of the cluster
Feb  2 13:34:53.015: INFO: Number of nodes with available pods: 0
Feb  2 13:34:53.015: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:34:54.262: INFO: Number of nodes with available pods: 0
Feb  2 13:34:54.262: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:34:55.185: INFO: Number of nodes with available pods: 0
Feb  2 13:34:55.185: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:34:56.029: INFO: Number of nodes with available pods: 0
Feb  2 13:34:56.029: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:34:57.080: INFO: Number of nodes with available pods: 0
Feb  2 13:34:57.080: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:34:59.822: INFO: Number of nodes with available pods: 0
Feb  2 13:34:59.822: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:35:00.109: INFO: Number of nodes with available pods: 0
Feb  2 13:35:00.109: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:35:01.031: INFO: Number of nodes with available pods: 0
Feb  2 13:35:01.031: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:35:02.032: INFO: Number of nodes with available pods: 0
Feb  2 13:35:02.033: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:35:03.027: INFO: Number of nodes with available pods: 2
Feb  2 13:35:03.027: INFO: Number of running nodes: 2, number of available pods: 2
Feb  2 13:35:03.027: INFO: Update the DaemonSet to trigger a rollout
Feb  2 13:35:03.039: INFO: Updating DaemonSet daemon-set
Feb  2 13:35:18.058: INFO: Roll back the DaemonSet before rollout is complete
Feb  2 13:35:18.064: INFO: Updating DaemonSet daemon-set
Feb  2 13:35:18.064: INFO: Make sure DaemonSet rollback is complete
Feb  2 13:35:18.141: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:18.141: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:19.198: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:19.198: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:20.453: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:20.453: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:21.158: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:21.158: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:22.184: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:22.184: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:23.211: INFO: Wrong image for pod: daemon-set-w8blg. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  2 13:35:23.211: INFO: Pod daemon-set-w8blg is not available
Feb  2 13:35:24.320: INFO: Pod daemon-set-lqjvg is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4399, will wait for the garbage collector to delete the pods
Feb  2 13:35:24.418: INFO: Deleting DaemonSet.extensions daemon-set took: 19.706311ms
Feb  2 13:35:24.719: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.098993ms
Feb  2 13:35:31.426: INFO: Number of nodes with available pods: 0
Feb  2 13:35:31.426: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 13:35:31.429: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4399/daemonsets","resourceVersion":"22818493"},"items":null}

Feb  2 13:35:31.434: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4399/pods","resourceVersion":"22818493"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:35:31.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4399" for this suite.
Feb  2 13:35:39.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:35:39.646: INFO: namespace daemonsets-4399 deletion completed in 8.186588499s

• [SLOW TEST:46.875 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
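What the rollback case did above: create a RollingUpdate DaemonSet, update its image to the unpullable foo:non-existent, then revert to docker.io/library/nginx:1.14-alpine before the rollout finished, so only the already-broken pod (daemon-set-w8blg) was replaced. A sketch of the starting object (labels and container name illustrative, images from the log):

    package sketch

    import (
    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // rollingUpdateDaemonSet is the pre-rollout object; the test flips
    // Template.Spec.Containers[0].Image to "foo:non-existent" and then
    // back, checking that healthy pods are not restarted by the rollback.
    func rollingUpdateDaemonSet() *appsv1.DaemonSet {
    	labels := map[string]string{"daemonset-name": "daemon-set"}
    	return &appsv1.DaemonSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
    		Spec: appsv1.DaemonSetSpec{
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
    				Type: appsv1.RollingUpdateDaemonSetStrategyType,
    			},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{Containers: []corev1.Container{{
    					Name:  "app",
    					Image: "docker.io/library/nginx:1.14-alpine",
    				}}},
    			},
    		},
    	}
    }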
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:35:39.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1714/secret-test-3958affc-e349-45d6-a442-7b9064d86509
STEP: Creating a pod to test consume secrets
Feb  2 13:35:39.768: INFO: Waiting up to 5m0s for pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e" in namespace "secrets-1714" to be "success or failure"
Feb  2 13:35:39.782: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.325351ms
Feb  2 13:35:41.797: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029034908s
Feb  2 13:35:43.809: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040698923s
Feb  2 13:35:45.821: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052659894s
Feb  2 13:35:47.833: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064451641s
Feb  2 13:35:49.844: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075355004s
STEP: Saw pod success
Feb  2 13:35:49.844: INFO: Pod "pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e" satisfied condition "success or failure"
Feb  2 13:35:49.850: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e container env-test: 
STEP: delete the pod
Feb  2 13:35:49.910: INFO: Waiting for pod pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e to disappear
Feb  2 13:35:49.914: INFO: Pod pod-configmaps-9cd87ee2-6964-4063-8a94-fc23a318c09e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:35:49.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1714" for this suite.
Feb  2 13:35:55.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:35:56.041: INFO: namespace secrets-1714 deletion completed in 6.122093076s

• [SLOW TEST:16.394 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
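Consuming a secret "via the environment" means a secretKeyRef (or envFrom) rather than a volume mount. A sketch with illustrative names (the secret key "data-1" is an assumption; only the container name env-test appears in the log):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretEnvPod reads one key of a Secret into an env var; the test
    // container then just prints its environment and exits so the pod
    // reaches Succeeded, as logged above.
    func secretEnvPod(secretName string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "env-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "SECRET_DATA",
    					ValueFrom: &corev1.EnvVarSource{
    						SecretKeyRef: &corev1.SecretKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
    							Key:                  "data-1", // assumed key
    						},
    					},
    				}},
    			}},
    		},
    	}
    }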
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:35:56.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-2223
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2223
STEP: Deleting pre-stop pod
Feb  2 13:36:17.324: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:36:17.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2223" for this suite.
Feb  2 13:36:57.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:36:57.594: INFO: namespace prestop-2223 deletion completed in 40.224559257s

• [SLOW TEST:61.553 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
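The "Received": {"prestop": 1} JSON above is the server pod recording exactly one call from the tester's preStop hook when the tester was deleted. A sketch of the tester side (v1.15-era API; the hook command and port are assumptions about how the tester phones home, not the e2e framework's exact mechanism):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preStopTesterPod carries a preStop exec hook that phones home to the
    // server pod when the tester is deleted. serverIP is a placeholder for
    // the server pod's IP.
    func preStopTesterPod(serverIP string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "tester",
    				Image:   "busybox",
    				Command: []string{"sleep", "600"},
    				Lifecycle: &corev1.Lifecycle{
    					PreStop: &corev1.Handler{
    						Exec: &corev1.ExecAction{
    							Command: []string{"wget", "-qO-", "http://" + serverIP + ":8080/prestop"},
    						},
    					},
    				},
    			}},
    		},
    	}
    }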
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:36:57.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:37:03.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1234" for this suite.
Feb  2 13:37:09.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:37:09.277: INFO: namespace watch-1234 deletion completed in 6.1933804s

• [SLOW TEST:11.682 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
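The ordering guarantee being checked: any two watches opened at the same resourceVersion must replay events identically. A sketch of opening such a watch (v1.15-era client-go, where Watch takes no context argument):

    package sketch

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/watch"
    	"k8s.io/client-go/kubernetes"
    )

    // watchFrom opens a watch on ConfigMaps starting at a given
    // resourceVersion. Opening several of these concurrently at the same
    // rv and comparing the event streams is the essence of the spec above.
    func watchFrom(cs kubernetes.Interface, ns, rv string) (watch.Interface, error) {
    	return cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: rv})
    }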
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:37:09.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-tdqs
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 13:37:09.441: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tdqs" in namespace "subpath-8151" to be "success or failure"
Feb  2 13:37:09.448: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.743993ms
Feb  2 13:37:11.457: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015527207s
Feb  2 13:37:13.533: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092136039s
Feb  2 13:37:15.542: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100462318s
Feb  2 13:37:17.555: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 8.114326477s
Feb  2 13:37:19.569: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 10.128189215s
Feb  2 13:37:21.578: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 12.137196438s
Feb  2 13:37:23.586: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 14.14507025s
Feb  2 13:37:25.595: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 16.154273882s
Feb  2 13:37:27.603: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 18.161878792s
Feb  2 13:37:29.614: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 20.172571261s
Feb  2 13:37:31.622: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 22.180574702s
Feb  2 13:37:33.632: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 24.190982947s
Feb  2 13:37:35.641: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 26.19973579s
Feb  2 13:37:37.650: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Running", Reason="", readiness=true. Elapsed: 28.208980093s
Feb  2 13:37:39.660: INFO: Pod "pod-subpath-test-downwardapi-tdqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.218672513s
STEP: Saw pod success
Feb  2 13:37:39.660: INFO: Pod "pod-subpath-test-downwardapi-tdqs" satisfied condition "success or failure"
Feb  2 13:37:39.664: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-tdqs container test-container-subpath-downwardapi-tdqs: 
STEP: delete the pod
Feb  2 13:37:39.746: INFO: Waiting for pod pod-subpath-test-downwardapi-tdqs to disappear
Feb  2 13:37:39.755: INFO: Pod pod-subpath-test-downwardapi-tdqs no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-tdqs
Feb  2 13:37:39.755: INFO: Deleting pod "pod-subpath-test-downwardapi-tdqs" in namespace "subpath-8151"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:37:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8151" for this suite.
Feb  2 13:37:45.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:37:45.988: INFO: namespace subpath-8151 deletion completed in 6.209966178s

• [SLOW TEST:36.711 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
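Atomic-writer volumes (configMap, secret, downwardAPI, projected) publish updates by swapping a symlink; a subPath mount pins one entry past that symlink, which is why it gets its own conformance coverage. A sketch of the downwardAPI variant (paths, image, and loop duration illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // subpathDownwardPod mounts a single file out of a downwardAPI volume
    // via subPath; the container reads it repeatedly while Running (hence
    // the ~30s of Running polls above) and then exits 0.
    func subpathDownwardPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path:     "podname",
    							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "test-container-subpath",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "for i in $(seq 1 30); do cat /test/podname; sleep 1; done"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name: "podinfo", MountPath: "/test/podname", SubPath: "podname",
    				}},
    			}},
    		},
    	}
    }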
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:37:45.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:37:54.206: INFO: Waiting up to 5m0s for pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48" in namespace "pods-8606" to be "success or failure"
Feb  2 13:37:54.223: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48": Phase="Pending", Reason="", readiness=false. Elapsed: 17.293707ms
Feb  2 13:37:56.238: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031732492s
Feb  2 13:37:58.254: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047624978s
Feb  2 13:38:00.264: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057678264s
Feb  2 13:38:02.276: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069702037s
STEP: Saw pod success
Feb  2 13:38:02.276: INFO: Pod "client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48" satisfied condition "success or failure"
Feb  2 13:38:02.287: INFO: Trying to get logs from node iruya-node pod client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48 container env3cont: 
STEP: delete the pod
Feb  2 13:38:02.373: INFO: Waiting for pod client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48 to disappear
Feb  2 13:38:02.389: INFO: Pod client-envvars-690bde92-1231-4e0b-934a-68b5fa2a9c48 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:38:02.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8606" for this suite.
Feb  2 13:38:50.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:38:50.593: INFO: namespace pods-8606 deletion completed in 48.158237564s

• [SLOW TEST:64.605 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
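The env vars asserted here are the kubelet-injected ones derived from Services that already exist when the pod starts (which is why the client pod is created only after the server is up). A sketch of a Service and the names it produces (service name and ports are illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // envExposingService: any pod created in the same namespace after this
    // Service exists sees, among others:
    //   FOOSERVICE_SERVICE_HOST=<cluster IP>
    //   FOOSERVICE_SERVICE_PORT=8765
    // (service name upper-cased, dashes mapped to underscores).
    func envExposingService() *corev1.Service {
    	return &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
    		Spec: corev1.ServiceSpec{
    			Selector: map[string]string{"name": "serve-hostname"},
    			Ports:    []corev1.ServicePort{{Port: 8765, TargetPort: intstr.FromInt(9376)}},
    		},
    	}
    }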
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:38:50.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-f38ff76a-7ada-4032-b211-9fd3663853ae in namespace container-probe-6287
Feb  2 13:38:58.786: INFO: Started pod test-webserver-f38ff76a-7ada-4032-b211-9fd3663853ae in namespace container-probe-6287
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 13:38:58.793: INFO: Initial restart count of pod test-webserver-f38ff76a-7ada-4032-b211-9fd3663853ae is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:43:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6287" for this suite.
Feb  2 13:43:06.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:43:07.057: INFO: namespace container-probe-6287 deletion completed in 6.149937333s

• [SLOW TEST:256.463 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:43:07.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  2 13:43:07.699: INFO: created pod pod-service-account-defaultsa
Feb  2 13:43:07.699: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  2 13:43:07.719: INFO: created pod pod-service-account-mountsa
Feb  2 13:43:07.719: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  2 13:43:07.762: INFO: created pod pod-service-account-nomountsa
Feb  2 13:43:07.762: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  2 13:43:07.844: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  2 13:43:07.844: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  2 13:43:07.869: INFO: created pod pod-service-account-mountsa-mountspec
Feb  2 13:43:07.869: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  2 13:43:07.991: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  2 13:43:07.991: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  2 13:43:08.042: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  2 13:43:08.042: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  2 13:43:08.911: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  2 13:43:08.911: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  2 13:43:08.923: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  2 13:43:08.923: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:43:08.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1810" for this suite.
Feb  2 13:43:43.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:43:44.068: INFO: namespace svcaccounts-1810 deletion completed in 34.718871503s

• [SLOW TEST:37.010 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
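The nine pods above walk the 3x3 matrix of {default SA, SA with automount on, SA with automount off} x {pod field unset, true, false}: when the pod-level field is set it wins, and only when it is unset does the ServiceAccount's setting apply, which matches the true/false pattern in the log. A sketch of the opt-out corner (container and SA names illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // optOutPod sets automountServiceAccountToken=false at the pod level,
    // so no token volume is mounted regardless of what the referenced
    // ServiceAccount says, matching "volume mount: false" in the log.
    func optOutPod() *corev1.Pod {
    	no := false
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
    		Spec: corev1.PodSpec{
    			ServiceAccountName:           "default",
    			AutomountServiceAccountToken: &no,
    			Containers: []corev1.Container{{
    				Name:  "token-test",
    				Image: "busybox",
    				Command: []string{"sh", "-c",
    					"test ! -d /var/run/secrets/kubernetes.io/serviceaccount && echo no-token"},
    			}},
    		},
    	}
    }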
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:43:44.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 13:43:44.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8530'
Feb  2 13:43:46.132: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 13:43:46.132: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  2 13:43:48.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8530'
Feb  2 13:43:48.383: INFO: stderr: ""
Feb  2 13:43:48.383: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:43:48.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8530" for this suite.
Feb  2 13:43:54.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:43:54.680: INFO: namespace kubectl-8530 deletion completed in 6.28844685s

• [SLOW TEST:10.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:43:54.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1115
I0202 13:43:54.840548       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1115, replica count: 1
I0202 13:43:55.891321       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:43:56.891830       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:43:57.892230       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:43:58.892578       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:43:59.893088       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:44:00.893615       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:44:01.894109       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:44:02.894533       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0202 13:44:03.894984       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  2 13:44:04.032: INFO: Created: latency-svc-sc7db
Feb  2 13:44:04.098: INFO: Got endpoints: latency-svc-sc7db [103.220445ms]
Feb  2 13:44:04.172: INFO: Created: latency-svc-bn25t
Feb  2 13:44:04.247: INFO: Created: latency-svc-r46c9
Feb  2 13:44:04.248: INFO: Got endpoints: latency-svc-bn25t [148.414945ms]
Feb  2 13:44:04.256: INFO: Got endpoints: latency-svc-r46c9 [155.935378ms]
Feb  2 13:44:04.313: INFO: Created: latency-svc-5kzdx
Feb  2 13:44:04.315: INFO: Got endpoints: latency-svc-5kzdx [214.926506ms]
Feb  2 13:44:04.454: INFO: Created: latency-svc-n7qf7
Feb  2 13:44:04.454: INFO: Got endpoints: latency-svc-n7qf7 [354.725127ms]
Feb  2 13:44:04.492: INFO: Created: latency-svc-zn4n2
Feb  2 13:44:04.498: INFO: Got endpoints: latency-svc-zn4n2 [398.343564ms]
Feb  2 13:44:04.667: INFO: Created: latency-svc-txz5x
Feb  2 13:44:04.696: INFO: Got endpoints: latency-svc-txz5x [596.22508ms]
Feb  2 13:44:04.845: INFO: Created: latency-svc-d2r8s
Feb  2 13:44:04.883: INFO: Got endpoints: latency-svc-d2r8s [783.583641ms]
Feb  2 13:44:05.020: INFO: Created: latency-svc-695zl
Feb  2 13:44:05.028: INFO: Got endpoints: latency-svc-695zl [928.534801ms]
Feb  2 13:44:05.094: INFO: Created: latency-svc-988jl
Feb  2 13:44:05.099: INFO: Got endpoints: latency-svc-988jl [998.893264ms]
Feb  2 13:44:05.383: INFO: Created: latency-svc-vlk2j
Feb  2 13:44:05.400: INFO: Got endpoints: latency-svc-vlk2j [1.300182822s]
Feb  2 13:44:05.473: INFO: Created: latency-svc-wlfmd
Feb  2 13:44:05.547: INFO: Got endpoints: latency-svc-wlfmd [1.446864098s]
Feb  2 13:44:05.559: INFO: Created: latency-svc-6l9wq
Feb  2 13:44:05.579: INFO: Got endpoints: latency-svc-6l9wq [1.479356082s]
Feb  2 13:44:05.618: INFO: Created: latency-svc-6r2jp
Feb  2 13:44:05.630: INFO: Got endpoints: latency-svc-6r2jp [1.529591166s]
Feb  2 13:44:05.743: INFO: Created: latency-svc-cgscs
Feb  2 13:44:05.752: INFO: Got endpoints: latency-svc-cgscs [1.652211573s]
Feb  2 13:44:05.837: INFO: Created: latency-svc-2vxtg
Feb  2 13:44:05.911: INFO: Got endpoints: latency-svc-2vxtg [1.811869959s]
Feb  2 13:44:05.986: INFO: Created: latency-svc-xfkxq
Feb  2 13:44:05.992: INFO: Got endpoints: latency-svc-xfkxq [1.743479071s]
Feb  2 13:44:06.129: INFO: Created: latency-svc-bjnmx
Feb  2 13:44:06.164: INFO: Got endpoints: latency-svc-bjnmx [1.908688504s]
Feb  2 13:44:06.198: INFO: Created: latency-svc-jdbc5
Feb  2 13:44:06.205: INFO: Got endpoints: latency-svc-jdbc5 [1.890551928s]
Feb  2 13:44:06.320: INFO: Created: latency-svc-24r4h
Feb  2 13:44:06.348: INFO: Got endpoints: latency-svc-24r4h [1.89435132s]
Feb  2 13:44:06.398: INFO: Created: latency-svc-rfzdf
Feb  2 13:44:06.481: INFO: Created: latency-svc-hbvk7
Feb  2 13:44:06.488: INFO: Got endpoints: latency-svc-rfzdf [1.989630181s]
Feb  2 13:44:06.501: INFO: Got endpoints: latency-svc-hbvk7 [1.80457777s]
Feb  2 13:44:06.559: INFO: Created: latency-svc-bnzz5
Feb  2 13:44:06.661: INFO: Got endpoints: latency-svc-bnzz5 [1.777250183s]
Feb  2 13:44:06.665: INFO: Created: latency-svc-d9b9x
Feb  2 13:44:06.682: INFO: Got endpoints: latency-svc-d9b9x [1.654504968s]
Feb  2 13:44:06.735: INFO: Created: latency-svc-rw2r7
Feb  2 13:44:06.829: INFO: Got endpoints: latency-svc-rw2r7 [1.730176467s]
Feb  2 13:44:06.836: INFO: Created: latency-svc-wgvd2
Feb  2 13:44:06.844: INFO: Got endpoints: latency-svc-wgvd2 [1.443822005s]
Feb  2 13:44:06.888: INFO: Created: latency-svc-88w2m
Feb  2 13:44:07.014: INFO: Got endpoints: latency-svc-88w2m [1.466722458s]
Feb  2 13:44:07.056: INFO: Created: latency-svc-gtldw
Feb  2 13:44:07.084: INFO: Created: latency-svc-86kg2
Feb  2 13:44:07.085: INFO: Got endpoints: latency-svc-gtldw [1.50536929s]
Feb  2 13:44:07.098: INFO: Got endpoints: latency-svc-86kg2 [1.467831276s]
Feb  2 13:44:07.195: INFO: Created: latency-svc-dqqdd
Feb  2 13:44:07.195: INFO: Got endpoints: latency-svc-dqqdd [1.442211103s]
Feb  2 13:44:07.239: INFO: Created: latency-svc-d4j2l
Feb  2 13:44:07.326: INFO: Got endpoints: latency-svc-d4j2l [1.415362994s]
Feb  2 13:44:07.359: INFO: Created: latency-svc-5fbds
Feb  2 13:44:07.361: INFO: Got endpoints: latency-svc-5fbds [1.369310152s]
Feb  2 13:44:07.485: INFO: Created: latency-svc-4vx5d
Feb  2 13:44:07.494: INFO: Got endpoints: latency-svc-4vx5d [1.329112513s]
Feb  2 13:44:07.569: INFO: Created: latency-svc-7zsp5
Feb  2 13:44:07.682: INFO: Got endpoints: latency-svc-7zsp5 [1.47675938s]
Feb  2 13:44:07.730: INFO: Created: latency-svc-8ftsr
Feb  2 13:44:07.777: INFO: Got endpoints: latency-svc-8ftsr [1.42829908s]
Feb  2 13:44:07.847: INFO: Created: latency-svc-spst4
Feb  2 13:44:07.861: INFO: Got endpoints: latency-svc-spst4 [178.047758ms]
Feb  2 13:44:07.948: INFO: Created: latency-svc-n2xzb
Feb  2 13:44:08.006: INFO: Got endpoints: latency-svc-n2xzb [1.517487157s]
Feb  2 13:44:08.050: INFO: Created: latency-svc-xsls7
Feb  2 13:44:08.076: INFO: Got endpoints: latency-svc-xsls7 [1.574956198s]
Feb  2 13:44:08.167: INFO: Created: latency-svc-tjvw2
Feb  2 13:44:08.177: INFO: Got endpoints: latency-svc-tjvw2 [1.51561517s]
Feb  2 13:44:08.246: INFO: Created: latency-svc-c2gkw
Feb  2 13:44:08.255: INFO: Got endpoints: latency-svc-c2gkw [1.572193727s]
Feb  2 13:44:08.355: INFO: Created: latency-svc-wzm9b
Feb  2 13:44:08.365: INFO: Got endpoints: latency-svc-wzm9b [1.53543643s]
Feb  2 13:44:08.417: INFO: Created: latency-svc-7ftx7
Feb  2 13:44:08.433: INFO: Got endpoints: latency-svc-7ftx7 [1.588370549s]
Feb  2 13:44:08.564: INFO: Created: latency-svc-284cz
Feb  2 13:44:08.564: INFO: Got endpoints: latency-svc-284cz [1.550619427s]
Feb  2 13:44:08.604: INFO: Created: latency-svc-bc874
Feb  2 13:44:08.688: INFO: Got endpoints: latency-svc-bc874 [1.602805076s]
Feb  2 13:44:08.728: INFO: Created: latency-svc-pwlwh
Feb  2 13:44:08.729: INFO: Got endpoints: latency-svc-pwlwh [1.631245766s]
Feb  2 13:44:08.864: INFO: Created: latency-svc-bgnpf
Feb  2 13:44:08.903: INFO: Got endpoints: latency-svc-bgnpf [1.708185628s]
Feb  2 13:44:09.036: INFO: Created: latency-svc-56bvl
Feb  2 13:44:09.040: INFO: Got endpoints: latency-svc-56bvl [1.713452405s]
Feb  2 13:44:09.098: INFO: Created: latency-svc-g54rt
Feb  2 13:44:09.117: INFO: Got endpoints: latency-svc-g54rt [1.755758202s]
Feb  2 13:44:09.281: INFO: Created: latency-svc-c57ss
Feb  2 13:44:09.435: INFO: Got endpoints: latency-svc-c57ss [1.940653921s]
Feb  2 13:44:09.481: INFO: Created: latency-svc-sg75b
Feb  2 13:44:09.501: INFO: Got endpoints: latency-svc-sg75b [1.724149566s]
Feb  2 13:44:09.680: INFO: Created: latency-svc-d6bx6
Feb  2 13:44:09.685: INFO: Got endpoints: latency-svc-d6bx6 [1.824542997s]
Feb  2 13:44:09.764: INFO: Created: latency-svc-7pc5m
Feb  2 13:44:09.870: INFO: Got endpoints: latency-svc-7pc5m [1.863209545s]
Feb  2 13:44:09.927: INFO: Created: latency-svc-vcpgd
Feb  2 13:44:09.949: INFO: Got endpoints: latency-svc-vcpgd [1.872860786s]
Feb  2 13:44:10.080: INFO: Created: latency-svc-q9l58
Feb  2 13:44:10.084: INFO: Got endpoints: latency-svc-q9l58 [1.90744594s]
Feb  2 13:44:10.213: INFO: Created: latency-svc-fb8n4
Feb  2 13:44:10.226: INFO: Got endpoints: latency-svc-fb8n4 [1.970581182s]
Feb  2 13:44:10.280: INFO: Created: latency-svc-s29x5
Feb  2 13:44:10.397: INFO: Got endpoints: latency-svc-s29x5 [2.031842714s]
Feb  2 13:44:10.412: INFO: Created: latency-svc-6cl26
Feb  2 13:44:10.422: INFO: Got endpoints: latency-svc-6cl26 [1.988888471s]
Feb  2 13:44:10.465: INFO: Created: latency-svc-f7lcw
Feb  2 13:44:10.477: INFO: Got endpoints: latency-svc-f7lcw [1.912301025s]
Feb  2 13:44:10.580: INFO: Created: latency-svc-sf4pr
Feb  2 13:44:10.624: INFO: Got endpoints: latency-svc-sf4pr [1.935321576s]
Feb  2 13:44:10.634: INFO: Created: latency-svc-kx9w9
Feb  2 13:44:10.649: INFO: Got endpoints: latency-svc-kx9w9 [1.919685901s]
Feb  2 13:44:10.727: INFO: Created: latency-svc-gwsgt
Feb  2 13:44:10.749: INFO: Got endpoints: latency-svc-gwsgt [1.846103072s]
Feb  2 13:44:10.920: INFO: Created: latency-svc-vd79l
Feb  2 13:44:10.951: INFO: Got endpoints: latency-svc-vd79l [1.910798127s]
Feb  2 13:44:10.974: INFO: Created: latency-svc-glmn8
Feb  2 13:44:10.987: INFO: Got endpoints: latency-svc-glmn8 [1.869220586s]
Feb  2 13:44:11.113: INFO: Created: latency-svc-wqwsc
Feb  2 13:44:11.113: INFO: Got endpoints: latency-svc-wqwsc [1.677543255s]
Feb  2 13:44:11.163: INFO: Created: latency-svc-rvfbx
Feb  2 13:44:11.166: INFO: Got endpoints: latency-svc-rvfbx [1.664997149s]
Feb  2 13:44:11.296: INFO: Created: latency-svc-4fq6v
Feb  2 13:44:11.296: INFO: Got endpoints: latency-svc-4fq6v [1.610527046s]
Feb  2 13:44:11.349: INFO: Created: latency-svc-fppnx
Feb  2 13:44:11.349: INFO: Got endpoints: latency-svc-fppnx [1.478810999s]
Feb  2 13:44:11.381: INFO: Created: latency-svc-kcwpw
Feb  2 13:44:11.383: INFO: Got endpoints: latency-svc-kcwpw [1.433657467s]
Feb  2 13:44:11.588: INFO: Created: latency-svc-wwrlh
Feb  2 13:44:11.606: INFO: Got endpoints: latency-svc-wwrlh [1.521154787s]
Feb  2 13:44:11.826: INFO: Created: latency-svc-d54ws
Feb  2 13:44:11.889: INFO: Got endpoints: latency-svc-d54ws [1.66285609s]
Feb  2 13:44:11.907: INFO: Created: latency-svc-qgj9r
Feb  2 13:44:11.913: INFO: Got endpoints: latency-svc-qgj9r [1.515162922s]
Feb  2 13:44:12.084: INFO: Created: latency-svc-pstd5
Feb  2 13:44:12.098: INFO: Got endpoints: latency-svc-pstd5 [1.676019916s]
Feb  2 13:44:12.142: INFO: Created: latency-svc-hp7gf
Feb  2 13:44:12.167: INFO: Got endpoints: latency-svc-hp7gf [1.689468973s]
Feb  2 13:44:12.321: INFO: Created: latency-svc-x6vx4
Feb  2 13:44:12.334: INFO: Got endpoints: latency-svc-x6vx4 [1.709828107s]
Feb  2 13:44:12.460: INFO: Created: latency-svc-qgh7w
Feb  2 13:44:12.475: INFO: Got endpoints: latency-svc-qgh7w [1.825909086s]
Feb  2 13:44:12.502: INFO: Created: latency-svc-hmzgv
Feb  2 13:44:12.555: INFO: Got endpoints: latency-svc-hmzgv [1.805284133s]
Feb  2 13:44:12.572: INFO: Created: latency-svc-fjx8z
Feb  2 13:44:12.694: INFO: Got endpoints: latency-svc-fjx8z [1.742639268s]
Feb  2 13:44:12.708: INFO: Created: latency-svc-jv7qg
Feb  2 13:44:12.722: INFO: Got endpoints: latency-svc-jv7qg [1.735278666s]
Feb  2 13:44:12.801: INFO: Created: latency-svc-b4kh7
Feb  2 13:44:12.910: INFO: Got endpoints: latency-svc-b4kh7 [1.796356771s]
Feb  2 13:44:12.959: INFO: Created: latency-svc-z4sdb
Feb  2 13:44:12.985: INFO: Got endpoints: latency-svc-z4sdb [1.818678093s]
Feb  2 13:44:13.098: INFO: Created: latency-svc-smvgp
Feb  2 13:44:13.098: INFO: Got endpoints: latency-svc-smvgp [1.802306793s]
Feb  2 13:44:13.141: INFO: Created: latency-svc-8pt2b
Feb  2 13:44:13.151: INFO: Got endpoints: latency-svc-8pt2b [1.802633936s]
Feb  2 13:44:13.185: INFO: Created: latency-svc-msw47
Feb  2 13:44:13.275: INFO: Got endpoints: latency-svc-msw47 [1.892091545s]
Feb  2 13:44:13.319: INFO: Created: latency-svc-7dpnq
Feb  2 13:44:13.330: INFO: Got endpoints: latency-svc-7dpnq [1.724208777s]
Feb  2 13:44:13.368: INFO: Created: latency-svc-c96p9
Feb  2 13:44:13.461: INFO: Got endpoints: latency-svc-c96p9 [1.571553798s]
Feb  2 13:44:13.499: INFO: Created: latency-svc-cpwlt
Feb  2 13:44:13.510: INFO: Got endpoints: latency-svc-cpwlt [1.597041208s]
Feb  2 13:44:13.662: INFO: Created: latency-svc-k9vxv
Feb  2 13:44:13.702: INFO: Got endpoints: latency-svc-k9vxv [1.604003558s]
Feb  2 13:44:13.735: INFO: Created: latency-svc-lbv8l
Feb  2 13:44:13.744: INFO: Got endpoints: latency-svc-lbv8l [1.576595113s]
Feb  2 13:44:13.951: INFO: Created: latency-svc-fsdl7
Feb  2 13:44:13.989: INFO: Got endpoints: latency-svc-fsdl7 [1.654521055s]
Feb  2 13:44:13.997: INFO: Created: latency-svc-l98jm
Feb  2 13:44:14.008: INFO: Got endpoints: latency-svc-l98jm [1.533122802s]
Feb  2 13:44:14.147: INFO: Created: latency-svc-5x9br
Feb  2 13:44:14.160: INFO: Got endpoints: latency-svc-5x9br [1.605189488s]
Feb  2 13:44:14.211: INFO: Created: latency-svc-n7tnm
Feb  2 13:44:14.242: INFO: Got endpoints: latency-svc-n7tnm [1.547621808s]
Feb  2 13:44:14.243: INFO: Created: latency-svc-ns6kr
Feb  2 13:44:14.404: INFO: Got endpoints: latency-svc-ns6kr [1.681663308s]
Feb  2 13:44:14.435: INFO: Created: latency-svc-wtpcw
Feb  2 13:44:14.439: INFO: Got endpoints: latency-svc-wtpcw [1.528652244s]
Feb  2 13:44:14.626: INFO: Created: latency-svc-h8czf
Feb  2 13:44:14.632: INFO: Got endpoints: latency-svc-h8czf [1.645831033s]
Feb  2 13:44:14.690: INFO: Created: latency-svc-w6k8h
Feb  2 13:44:14.690: INFO: Got endpoints: latency-svc-w6k8h [1.591886354s]
Feb  2 13:44:14.842: INFO: Created: latency-svc-rf5cz
Feb  2 13:44:14.870: INFO: Got endpoints: latency-svc-rf5cz [1.718234374s]
Feb  2 13:44:14.888: INFO: Created: latency-svc-lltpp
Feb  2 13:44:14.893: INFO: Got endpoints: latency-svc-lltpp [1.617655404s]
Feb  2 13:44:15.036: INFO: Created: latency-svc-wgpx2
Feb  2 13:44:15.043: INFO: Got endpoints: latency-svc-wgpx2 [1.712444638s]
Feb  2 13:44:15.076: INFO: Created: latency-svc-72dl8
Feb  2 13:44:15.110: INFO: Got endpoints: latency-svc-72dl8 [1.648882316s]
Feb  2 13:44:15.119: INFO: Created: latency-svc-fmt75
Feb  2 13:44:15.223: INFO: Got endpoints: latency-svc-fmt75 [1.713503569s]
Feb  2 13:44:15.282: INFO: Created: latency-svc-vsm22
Feb  2 13:44:15.288: INFO: Got endpoints: latency-svc-vsm22 [1.584783004s]
Feb  2 13:44:15.317: INFO: Created: latency-svc-gvnrc
Feb  2 13:44:15.435: INFO: Got endpoints: latency-svc-gvnrc [1.690843921s]
Feb  2 13:44:15.502: INFO: Created: latency-svc-k9z2m
Feb  2 13:44:15.502: INFO: Got endpoints: latency-svc-k9z2m [1.512558347s]
Feb  2 13:44:15.643: INFO: Created: latency-svc-mzmqb
Feb  2 13:44:15.643: INFO: Got endpoints: latency-svc-mzmqb [1.63469329s]
Feb  2 13:44:15.691: INFO: Created: latency-svc-qzl4d
Feb  2 13:44:15.698: INFO: Got endpoints: latency-svc-qzl4d [1.53715722s]
Feb  2 13:44:15.841: INFO: Created: latency-svc-dmtnm
Feb  2 13:44:15.843: INFO: Got endpoints: latency-svc-dmtnm [1.600675463s]
Feb  2 13:44:16.319: INFO: Created: latency-svc-n5zpv
Feb  2 13:44:16.322: INFO: Got endpoints: latency-svc-n5zpv [1.917753525s]
Feb  2 13:44:16.387: INFO: Created: latency-svc-wlqbz
Feb  2 13:44:16.485: INFO: Got endpoints: latency-svc-wlqbz [2.046162912s]
Feb  2 13:44:16.557: INFO: Created: latency-svc-d7r7s
Feb  2 13:44:16.573: INFO: Got endpoints: latency-svc-d7r7s [1.941187325s]
Feb  2 13:44:16.690: INFO: Created: latency-svc-wxkst
Feb  2 13:44:16.715: INFO: Got endpoints: latency-svc-wxkst [2.024900078s]
Feb  2 13:44:16.747: INFO: Created: latency-svc-p556c
Feb  2 13:44:16.749: INFO: Got endpoints: latency-svc-p556c [1.878427992s]
Feb  2 13:44:16.852: INFO: Created: latency-svc-4w6g2
Feb  2 13:44:16.884: INFO: Got endpoints: latency-svc-4w6g2 [1.990547646s]
Feb  2 13:44:17.050: INFO: Created: latency-svc-q6b4d
Feb  2 13:44:17.050: INFO: Got endpoints: latency-svc-q6b4d [2.007229683s]
Feb  2 13:44:17.328: INFO: Created: latency-svc-lzrx2
Feb  2 13:44:17.388: INFO: Got endpoints: latency-svc-lzrx2 [2.278350746s]
Feb  2 13:44:17.400: INFO: Created: latency-svc-xppdq
Feb  2 13:44:17.413: INFO: Got endpoints: latency-svc-xppdq [2.189255849s]
Feb  2 13:44:17.607: INFO: Created: latency-svc-6d468
Feb  2 13:44:17.619: INFO: Got endpoints: latency-svc-6d468 [2.331632409s]
Feb  2 13:44:17.845: INFO: Created: latency-svc-7mjzd
Feb  2 13:44:17.857: INFO: Got endpoints: latency-svc-7mjzd [2.421970402s]
Feb  2 13:44:18.108: INFO: Created: latency-svc-8vtpd
Feb  2 13:44:18.108: INFO: Got endpoints: latency-svc-8vtpd [2.60547905s]
Feb  2 13:44:18.154: INFO: Created: latency-svc-q22c7
Feb  2 13:44:18.334: INFO: Created: latency-svc-szddd
Feb  2 13:44:18.334: INFO: Got endpoints: latency-svc-q22c7 [2.690961687s]
Feb  2 13:44:18.347: INFO: Got endpoints: latency-svc-szddd [2.648668566s]
Feb  2 13:44:18.407: INFO: Created: latency-svc-dpckj
Feb  2 13:44:18.427: INFO: Got endpoints: latency-svc-dpckj [2.583536905s]
Feb  2 13:44:18.553: INFO: Created: latency-svc-86mfb
Feb  2 13:44:18.568: INFO: Got endpoints: latency-svc-86mfb [2.245848732s]
Feb  2 13:44:18.610: INFO: Created: latency-svc-cjgxk
Feb  2 13:44:18.617: INFO: Got endpoints: latency-svc-cjgxk [2.131892458s]
Feb  2 13:44:18.813: INFO: Created: latency-svc-4qq78
Feb  2 13:44:18.813: INFO: Got endpoints: latency-svc-4qq78 [2.239492661s]
Feb  2 13:44:18.873: INFO: Created: latency-svc-thrvr
Feb  2 13:44:18.881: INFO: Got endpoints: latency-svc-thrvr [2.165889482s]
Feb  2 13:44:19.063: INFO: Created: latency-svc-xp72b
Feb  2 13:44:19.072: INFO: Got endpoints: latency-svc-xp72b [2.322838826s]
Feb  2 13:44:19.173: INFO: Created: latency-svc-v772l
Feb  2 13:44:19.185: INFO: Got endpoints: latency-svc-v772l [2.300685128s]
Feb  2 13:44:19.225: INFO: Created: latency-svc-ndvdp
Feb  2 13:44:19.236: INFO: Got endpoints: latency-svc-ndvdp [2.185838225s]
Feb  2 13:44:19.345: INFO: Created: latency-svc-x2mcj
Feb  2 13:44:19.354: INFO: Got endpoints: latency-svc-x2mcj [1.96514327s]
Feb  2 13:44:19.401: INFO: Created: latency-svc-tvpzv
Feb  2 13:44:19.415: INFO: Got endpoints: latency-svc-tvpzv [2.002074224s]
Feb  2 13:44:19.524: INFO: Created: latency-svc-79db2
Feb  2 13:44:19.552: INFO: Got endpoints: latency-svc-79db2 [1.93200648s]
Feb  2 13:44:19.580: INFO: Created: latency-svc-hrzg5
Feb  2 13:44:19.587: INFO: Got endpoints: latency-svc-hrzg5 [1.729164116s]
Feb  2 13:44:19.747: INFO: Created: latency-svc-zvqxw
Feb  2 13:44:19.748: INFO: Got endpoints: latency-svc-zvqxw [1.639677012s]
Feb  2 13:44:19.835: INFO: Created: latency-svc-pttng
Feb  2 13:44:19.835: INFO: Got endpoints: latency-svc-pttng [1.500629296s]
Feb  2 13:44:19.991: INFO: Created: latency-svc-6rpqn
Feb  2 13:44:19.999: INFO: Got endpoints: latency-svc-6rpqn [1.652290626s]
Feb  2 13:44:20.158: INFO: Created: latency-svc-fwf5k
Feb  2 13:44:20.160: INFO: Got endpoints: latency-svc-fwf5k [1.733553667s]
Feb  2 13:44:20.219: INFO: Created: latency-svc-wmm7d
Feb  2 13:44:20.219: INFO: Got endpoints: latency-svc-wmm7d [1.65036556s]
Feb  2 13:44:20.337: INFO: Created: latency-svc-pzbgg
Feb  2 13:44:20.348: INFO: Got endpoints: latency-svc-pzbgg [1.730378071s]
Feb  2 13:44:20.400: INFO: Created: latency-svc-djft8
Feb  2 13:44:20.474: INFO: Got endpoints: latency-svc-djft8 [1.661256174s]
Feb  2 13:44:20.713: INFO: Created: latency-svc-65cth
Feb  2 13:44:20.882: INFO: Created: latency-svc-skc4l
Feb  2 13:44:20.882: INFO: Got endpoints: latency-svc-65cth [2.00099397s]
Feb  2 13:44:20.915: INFO: Got endpoints: latency-svc-skc4l [1.842851744s]
Feb  2 13:44:20.963: INFO: Created: latency-svc-jrq7k
Feb  2 13:44:21.059: INFO: Got endpoints: latency-svc-jrq7k [1.874223021s]
Feb  2 13:44:21.070: INFO: Created: latency-svc-fv55p
Feb  2 13:44:21.144: INFO: Got endpoints: latency-svc-fv55p [1.907917307s]
Feb  2 13:44:21.154: INFO: Created: latency-svc-sgdnh
Feb  2 13:44:21.222: INFO: Got endpoints: latency-svc-sgdnh [1.868603172s]
Feb  2 13:44:21.263: INFO: Created: latency-svc-x279k
Feb  2 13:44:21.308: INFO: Got endpoints: latency-svc-x279k [1.893171069s]
Feb  2 13:44:21.320: INFO: Created: latency-svc-6m4bd
Feb  2 13:44:21.390: INFO: Got endpoints: latency-svc-6m4bd [1.83846855s]
Feb  2 13:44:21.423: INFO: Created: latency-svc-xqd4n
Feb  2 13:44:21.424: INFO: Got endpoints: latency-svc-xqd4n [1.837113069s]
Feb  2 13:44:21.469: INFO: Created: latency-svc-d227h
Feb  2 13:44:21.472: INFO: Got endpoints: latency-svc-d227h [1.724190381s]
Feb  2 13:44:21.595: INFO: Created: latency-svc-zdjfn
Feb  2 13:44:21.632: INFO: Got endpoints: latency-svc-zdjfn [1.796295225s]
Feb  2 13:44:21.632: INFO: Created: latency-svc-fltrw
Feb  2 13:44:21.657: INFO: Got endpoints: latency-svc-fltrw [1.6581674s]
Feb  2 13:44:21.772: INFO: Created: latency-svc-wbqzq
Feb  2 13:44:21.808: INFO: Created: latency-svc-kjhx7
Feb  2 13:44:21.809: INFO: Got endpoints: latency-svc-wbqzq [1.648786892s]
Feb  2 13:44:21.859: INFO: Got endpoints: latency-svc-kjhx7 [1.639606547s]
Feb  2 13:44:21.876: INFO: Created: latency-svc-gbzcn
Feb  2 13:44:21.986: INFO: Got endpoints: latency-svc-gbzcn [1.638297687s]
Feb  2 13:44:22.016: INFO: Created: latency-svc-xfmmg
Feb  2 13:44:22.029: INFO: Got endpoints: latency-svc-xfmmg [1.554268185s]
Feb  2 13:44:22.089: INFO: Created: latency-svc-6rt5k
Feb  2 13:44:22.229: INFO: Got endpoints: latency-svc-6rt5k [1.346866899s]
Feb  2 13:44:22.266: INFO: Created: latency-svc-zlf2w
Feb  2 13:44:22.272: INFO: Got endpoints: latency-svc-zlf2w [1.356593103s]
Feb  2 13:44:22.320: INFO: Created: latency-svc-gb7jc
Feb  2 13:44:22.408: INFO: Got endpoints: latency-svc-gb7jc [1.348589288s]
Feb  2 13:44:22.439: INFO: Created: latency-svc-dzv9j
Feb  2 13:44:22.453: INFO: Got endpoints: latency-svc-dzv9j [1.308096552s]
Feb  2 13:44:22.519: INFO: Created: latency-svc-78zpg
Feb  2 13:44:22.588: INFO: Got endpoints: latency-svc-78zpg [1.364488146s]
Feb  2 13:44:22.621: INFO: Created: latency-svc-xkfd9
Feb  2 13:44:22.632: INFO: Got endpoints: latency-svc-xkfd9 [1.323042174s]
Feb  2 13:44:22.663: INFO: Created: latency-svc-zp6v7
Feb  2 13:44:22.672: INFO: Got endpoints: latency-svc-zp6v7 [1.281128039s]
Feb  2 13:44:22.768: INFO: Created: latency-svc-mfcbd
Feb  2 13:44:22.779: INFO: Got endpoints: latency-svc-mfcbd [1.354399067s]
Feb  2 13:44:22.816: INFO: Created: latency-svc-x667z
Feb  2 13:44:22.939: INFO: Got endpoints: latency-svc-x667z [1.467154242s]
Feb  2 13:44:23.003: INFO: Created: latency-svc-cglvk
Feb  2 13:44:23.007: INFO: Got endpoints: latency-svc-cglvk [1.374538221s]
Feb  2 13:44:23.188: INFO: Created: latency-svc-d2qpn
Feb  2 13:44:23.203: INFO: Got endpoints: latency-svc-d2qpn [1.546031775s]
Feb  2 13:44:23.395: INFO: Created: latency-svc-79fg9
Feb  2 13:44:23.398: INFO: Got endpoints: latency-svc-79fg9 [1.588371186s]
Feb  2 13:44:23.476: INFO: Created: latency-svc-wfmbz
Feb  2 13:44:23.553: INFO: Got endpoints: latency-svc-wfmbz [1.694278334s]
Feb  2 13:44:23.572: INFO: Created: latency-svc-2x7cf
Feb  2 13:44:23.588: INFO: Got endpoints: latency-svc-2x7cf [1.60098685s]
Feb  2 13:44:23.624: INFO: Created: latency-svc-f76f5
Feb  2 13:44:23.743: INFO: Got endpoints: latency-svc-f76f5 [1.713876503s]
Feb  2 13:44:23.746: INFO: Created: latency-svc-hws6z
Feb  2 13:44:23.770: INFO: Got endpoints: latency-svc-hws6z [1.540488678s]
Feb  2 13:44:23.840: INFO: Created: latency-svc-xzsmw
Feb  2 13:44:23.951: INFO: Got endpoints: latency-svc-xzsmw [1.678851636s]
Feb  2 13:44:24.005: INFO: Created: latency-svc-jxhws
Feb  2 13:44:24.006: INFO: Got endpoints: latency-svc-jxhws [1.596669123s]
Feb  2 13:44:24.182: INFO: Created: latency-svc-dx865
Feb  2 13:44:24.189: INFO: Got endpoints: latency-svc-dx865 [1.735661992s]
Feb  2 13:44:24.232: INFO: Created: latency-svc-swhz9
Feb  2 13:44:24.342: INFO: Created: latency-svc-jfltf
Feb  2 13:44:24.343: INFO: Got endpoints: latency-svc-swhz9 [1.755110878s]
Feb  2 13:44:24.346: INFO: Got endpoints: latency-svc-jfltf [1.713481303s]
Feb  2 13:44:24.384: INFO: Created: latency-svc-ccqvn
Feb  2 13:44:24.385: INFO: Got endpoints: latency-svc-ccqvn [1.713480006s]
Feb  2 13:44:24.442: INFO: Created: latency-svc-rldtm
Feb  2 13:44:24.532: INFO: Got endpoints: latency-svc-rldtm [1.752909601s]
Feb  2 13:44:24.559: INFO: Created: latency-svc-zh8lq
Feb  2 13:44:24.564: INFO: Got endpoints: latency-svc-zh8lq [1.624803869s]
Feb  2 13:44:24.620: INFO: Created: latency-svc-2h89p
Feb  2 13:44:24.726: INFO: Got endpoints: latency-svc-2h89p [1.719289521s]
Feb  2 13:44:24.759: INFO: Created: latency-svc-92kqc
Feb  2 13:44:24.767: INFO: Got endpoints: latency-svc-92kqc [1.563058694s]
Feb  2 13:44:24.817: INFO: Created: latency-svc-cgc74
Feb  2 13:44:24.820: INFO: Got endpoints: latency-svc-cgc74 [1.42243335s]
Feb  2 13:44:24.956: INFO: Created: latency-svc-zn5b2
Feb  2 13:44:24.979: INFO: Got endpoints: latency-svc-zn5b2 [1.425778476s]
Feb  2 13:44:25.075: INFO: Created: latency-svc-bz7hg
Feb  2 13:44:25.086: INFO: Got endpoints: latency-svc-bz7hg [1.49810977s]
Feb  2 13:44:25.131: INFO: Created: latency-svc-627rk
Feb  2 13:44:25.132: INFO: Got endpoints: latency-svc-627rk [1.388982601s]
Feb  2 13:44:25.267: INFO: Created: latency-svc-klcnz
Feb  2 13:44:25.267: INFO: Got endpoints: latency-svc-klcnz [1.496117264s]
Feb  2 13:44:25.302: INFO: Created: latency-svc-hn8sb
Feb  2 13:44:25.310: INFO: Got endpoints: latency-svc-hn8sb [1.358421643s]
Feb  2 13:44:25.346: INFO: Created: latency-svc-d97c8
Feb  2 13:44:25.425: INFO: Got endpoints: latency-svc-d97c8 [1.419010109s]
Feb  2 13:44:25.462: INFO: Created: latency-svc-xhx5m
Feb  2 13:44:25.482: INFO: Got endpoints: latency-svc-xhx5m [1.293280258s]
Feb  2 13:44:25.512: INFO: Created: latency-svc-6svjg
Feb  2 13:44:25.519: INFO: Got endpoints: latency-svc-6svjg [1.175721267s]
Feb  2 13:44:25.645: INFO: Created: latency-svc-blnbr
Feb  2 13:44:25.660: INFO: Got endpoints: latency-svc-blnbr [1.314623967s]
Feb  2 13:44:25.711: INFO: Created: latency-svc-nj9xf
Feb  2 13:44:25.717: INFO: Got endpoints: latency-svc-nj9xf [1.331150934s]
Feb  2 13:44:25.851: INFO: Created: latency-svc-5tzb5
Feb  2 13:44:25.877: INFO: Got endpoints: latency-svc-5tzb5 [1.34531656s]
Feb  2 13:44:25.882: INFO: Created: latency-svc-s8t7s
Feb  2 13:44:25.890: INFO: Got endpoints: latency-svc-s8t7s [1.325582477s]
Feb  2 13:44:26.035: INFO: Created: latency-svc-jn95g
Feb  2 13:44:26.082: INFO: Got endpoints: latency-svc-jn95g [1.354966709s]
Feb  2 13:44:26.090: INFO: Created: latency-svc-qkds4
Feb  2 13:44:26.094: INFO: Got endpoints: latency-svc-qkds4 [1.327081029s]
Feb  2 13:44:26.136: INFO: Created: latency-svc-hsx6t
Feb  2 13:44:26.253: INFO: Got endpoints: latency-svc-hsx6t [1.432357125s]
Feb  2 13:44:26.516: INFO: Created: latency-svc-lgzpg
Feb  2 13:44:26.524: INFO: Got endpoints: latency-svc-lgzpg [1.545005343s]
Feb  2 13:44:26.577: INFO: Created: latency-svc-lbxsb
Feb  2 13:44:26.676: INFO: Got endpoints: latency-svc-lbxsb [1.589385479s]
Feb  2 13:44:26.734: INFO: Created: latency-svc-fq8h9
Feb  2 13:44:26.752: INFO: Created: latency-svc-r2xmg
Feb  2 13:44:26.756: INFO: Got endpoints: latency-svc-fq8h9 [1.623318122s]
Feb  2 13:44:26.758: INFO: Got endpoints: latency-svc-r2xmg [1.490807322s]
Feb  2 13:44:26.758: INFO: Latencies: [148.414945ms 155.935378ms 178.047758ms 214.926506ms 354.725127ms 398.343564ms 596.22508ms 783.583641ms 928.534801ms 998.893264ms 1.175721267s 1.281128039s 1.293280258s 1.300182822s 1.308096552s 1.314623967s 1.323042174s 1.325582477s 1.327081029s 1.329112513s 1.331150934s 1.34531656s 1.346866899s 1.348589288s 1.354399067s 1.354966709s 1.356593103s 1.358421643s 1.364488146s 1.369310152s 1.374538221s 1.388982601s 1.415362994s 1.419010109s 1.42243335s 1.425778476s 1.42829908s 1.432357125s 1.433657467s 1.442211103s 1.443822005s 1.446864098s 1.466722458s 1.467154242s 1.467831276s 1.47675938s 1.478810999s 1.479356082s 1.490807322s 1.496117264s 1.49810977s 1.500629296s 1.50536929s 1.512558347s 1.515162922s 1.51561517s 1.517487157s 1.521154787s 1.528652244s 1.529591166s 1.533122802s 1.53543643s 1.53715722s 1.540488678s 1.545005343s 1.546031775s 1.547621808s 1.550619427s 1.554268185s 1.563058694s 1.571553798s 1.572193727s 1.574956198s 1.576595113s 1.584783004s 1.588370549s 1.588371186s 1.589385479s 1.591886354s 1.596669123s 1.597041208s 1.600675463s 1.60098685s 1.602805076s 1.604003558s 1.605189488s 1.610527046s 1.617655404s 1.623318122s 1.624803869s 1.631245766s 1.63469329s 1.638297687s 1.639606547s 1.639677012s 1.645831033s 1.648786892s 1.648882316s 1.65036556s 1.652211573s 1.652290626s 1.654504968s 1.654521055s 1.6581674s 1.661256174s 1.66285609s 1.664997149s 1.676019916s 1.677543255s 1.678851636s 1.681663308s 1.689468973s 1.690843921s 1.694278334s 1.708185628s 1.709828107s 1.712444638s 1.713452405s 1.713480006s 1.713481303s 1.713503569s 1.713876503s 1.718234374s 1.719289521s 1.724149566s 1.724190381s 1.724208777s 1.729164116s 1.730176467s 1.730378071s 1.733553667s 1.735278666s 1.735661992s 1.742639268s 1.743479071s 1.752909601s 1.755110878s 1.755758202s 1.777250183s 1.796295225s 1.796356771s 1.802306793s 1.802633936s 1.80457777s 1.805284133s 1.811869959s 1.818678093s 1.824542997s 1.825909086s 1.837113069s 1.83846855s 1.842851744s 1.846103072s 1.863209545s 1.868603172s 1.869220586s 1.872860786s 1.874223021s 1.878427992s 1.890551928s 1.892091545s 1.893171069s 1.89435132s 1.90744594s 1.907917307s 1.908688504s 1.910798127s 1.912301025s 1.917753525s 1.919685901s 1.93200648s 1.935321576s 1.940653921s 1.941187325s 1.96514327s 1.970581182s 1.988888471s 1.989630181s 1.990547646s 2.00099397s 2.002074224s 2.007229683s 2.024900078s 2.031842714s 2.046162912s 2.131892458s 2.165889482s 2.185838225s 2.189255849s 2.239492661s 2.245848732s 2.278350746s 2.300685128s 2.322838826s 2.331632409s 2.421970402s 2.583536905s 2.60547905s 2.648668566s 2.690961687s]
Feb  2 13:44:26.758: INFO: 50 %ile: 1.652290626s
Feb  2 13:44:26.758: INFO: 90 %ile: 2.002074224s
Feb  2 13:44:26.758: INFO: 99 %ile: 2.648668566s
Feb  2 13:44:26.758: INFO: Total sample count: 200
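(Editor's note: the three %ile figures above are percentiles taken over the 200 sorted durations in the Latencies: line. A minimal Go sketch of that computation, assuming the nearest-rank method; the e2e framework's exact rounding may differ, and only three of the logged durations are used here as stand-ins for the full sample set.)

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0 < p <= 100) of the samples,
// computed with the nearest-rank method over a sorted copy.
func percentile(samples []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := (len(sorted)*p + 99) / 100 // ceil(len*p/100), 1-based rank
	if idx < 1 {
		idx = 1
	}
	return sorted[idx-1]
}

func main() {
	// Three of the durations logged above, standing in for all 200 samples.
	samples := []time.Duration{
		148414945 * time.Nanosecond,  // 148.414945ms
		1652290626 * time.Nanosecond, // 1.652290626s
		2690961687 * time.Nanosecond, // 2.690961687s
	}
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}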
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:44:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1115" for this suite.
Feb  2 13:45:08.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:45:08.972: INFO: namespace svc-latency-1115 deletion completed in 42.209386244s

• [SLOW TEST:74.291 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
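(Editor's note: each Created:/Got endpoints: pair in the test above times how long a newly created Service takes to have its Endpoints object populated. A hedged client-go sketch of that measurement loop, assuming a recent client-go with context-aware calls; the helper, service names, and namespace are illustrative, not the suite's actual implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// measureEndpointLatency creates a Service and times how long it takes for
// its Endpoints object to be populated, i.e. the interval between the
// "Created:" and "Got endpoints:" lines above. Error handling is trimmed.
func measureEndpointLatency(ctx context.Context, c kubernetes.Interface, ns string, svc *corev1.Service) (time.Duration, error) {
	start := time.Now()
	if _, err := c.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}
	w, err := c.CoreV1().Endpoints(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + svc.Name,
	})
	if err != nil {
		return 0, err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ep, ok := ev.Object.(*corev1.Endpoints); ok && len(ep.Subsets) > 0 {
			return time.Since(start), nil
		}
	}
	return 0, fmt.Errorf("watch closed before endpoints for %s appeared", svc.Name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "latency-svc-example"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "svc-latency-pod"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	d, err := measureEndpointLatency(context.Background(), client, "default", svc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got endpoints: %s [%v]\n", svc.Name, d)
}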
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:45:08.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4a0ea60a-fb18-49a7-8dfd-25697db1d0e5
STEP: Creating a pod to test consume configMaps
Feb  2 13:45:09.105: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3" in namespace "configmap-8243" to be "success or failure"
Feb  2 13:45:09.127: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.527386ms
Feb  2 13:45:11.135: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030222067s
Feb  2 13:45:13.143: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037824621s
Feb  2 13:45:15.154: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048515031s
Feb  2 13:45:17.160: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054799184s
STEP: Saw pod success
Feb  2 13:45:17.160: INFO: Pod "pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3" satisfied condition "success or failure"
Feb  2 13:45:17.190: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3 container configmap-volume-test: 
STEP: delete the pod
Feb  2 13:45:17.290: INFO: Waiting for pod pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3 to disappear
Feb  2 13:45:17.350: INFO: Pod pod-configmaps-d2bdcffe-461d-4ffd-88c4-2b8aef4790c3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:45:17.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8243" for this suite.
Feb  2 13:45:23.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:45:23.492: INFO: namespace configmap-8243 deletion completed in 6.133206667s

• [SLOW TEST:14.520 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
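(Editor's note: the pod built by the ConfigMap test above mounts a single ConfigMap through two separate volumes and reads both copies. A sketch of that pod shape using client-go's typed API; the pod name, image, command, and mount paths are illustrative, not the test's generated ones.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVolumePod builds a pod that mounts the same ConfigMap through two
// separate volumes, which is the shape of workload this test verifies.
func multiVolumePod(configMapName string) *corev1.Pod {
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
			},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource()},
				{Name: "configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume-1/* /etc/configmap-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(multiVolumePod("configmap-test-volume-example").Name)
}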
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:45:23.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-19e21012-4139-4c81-8c5c-a5b37f209ac3
STEP: Creating a pod to test consume configMaps
Feb  2 13:45:23.593: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee" in namespace "projected-8800" to be "success or failure"
Feb  2 13:45:23.692: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Pending", Reason="", readiness=false. Elapsed: 98.572031ms
Feb  2 13:45:25.698: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105184677s
Feb  2 13:45:27.706: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112374485s
Feb  2 13:45:29.713: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120030872s
Feb  2 13:45:31.729: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135957388s
Feb  2 13:45:33.751: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.158022098s
STEP: Saw pod success
Feb  2 13:45:33.752: INFO: Pod "pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee" satisfied condition "success or failure"
Feb  2 13:45:33.765: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 13:45:34.030: INFO: Waiting for pod pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee to disappear
Feb  2 13:45:34.041: INFO: Pod pod-projected-configmaps-0ad4357e-2a25-4d1d-bf2a-4d93f62caaee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:45:34.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8800" for this suite.
Feb  2 13:45:40.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:45:40.271: INFO: namespace projected-8800 deletion completed in 6.224312899s

• [SLOW TEST:16.779 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
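(Editor's note: the projected ConfigMap test above exercises the "mappings and item mode" volume shape: a projected volume that remaps one ConfigMap key to a new relative path and pins that file's mode. A sketch with client-go's typed API; the key, path, and the 0400 mode are illustrative. Per-item modes require a Linux node, hence the [LinuxOnly] tag.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume builds a projected volume that exposes a single
// ConfigMap key under a remapped path with an explicit file mode.
func projectedConfigMapVolume(configMapName string) corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key inside the ConfigMap
							Path: "path/to/data-2", // remapped path under the mount point
							Mode: &mode,            // per-item file mode
						}},
					},
				}},
			},
		},
	}
}

func main() {
	fmt.Println(projectedConfigMapVolume("projected-configmap-test-volume-map-example").Name)
}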
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:45:40.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6374
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-6374
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6374
Feb  2 13:45:40.401: INFO: Found 0 stateful pods, waiting for 1
Feb  2 13:45:50.413: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  2 13:45:50.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 13:45:50.968: INFO: stderr: "I0202 13:45:50.695236    1701 log.go:172] (0xc00091c370) (0xc000882640) Create stream\nI0202 13:45:50.695561    1701 log.go:172] (0xc00091c370) (0xc000882640) Stream added, broadcasting: 1\nI0202 13:45:50.700886    1701 log.go:172] (0xc00091c370) Reply frame received for 1\nI0202 13:45:50.700924    1701 log.go:172] (0xc00091c370) (0xc000948000) Create stream\nI0202 13:45:50.700932    1701 log.go:172] (0xc00091c370) (0xc000948000) Stream added, broadcasting: 3\nI0202 13:45:50.703212    1701 log.go:172] (0xc00091c370) Reply frame received for 3\nI0202 13:45:50.703283    1701 log.go:172] (0xc00091c370) (0xc0008826e0) Create stream\nI0202 13:45:50.703292    1701 log.go:172] (0xc00091c370) (0xc0008826e0) Stream added, broadcasting: 5\nI0202 13:45:50.704610    1701 log.go:172] (0xc00091c370) Reply frame received for 5\nI0202 13:45:50.806349    1701 log.go:172] (0xc00091c370) Data frame received for 5\nI0202 13:45:50.806393    1701 log.go:172] (0xc0008826e0) (5) Data frame handling\nI0202 13:45:50.806421    1701 log.go:172] (0xc0008826e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:45:50.838020    1701 log.go:172] (0xc00091c370) Data frame received for 3\nI0202 13:45:50.838080    1701 log.go:172] (0xc000948000) (3) Data frame handling\nI0202 13:45:50.838101    1701 log.go:172] (0xc000948000) (3) Data frame sent\nI0202 13:45:50.958466    1701 log.go:172] (0xc00091c370) (0xc000948000) Stream removed, broadcasting: 3\nI0202 13:45:50.958910    1701 log.go:172] (0xc00091c370) Data frame received for 1\nI0202 13:45:50.958987    1701 log.go:172] (0xc00091c370) (0xc0008826e0) Stream removed, broadcasting: 5\nI0202 13:45:50.959035    1701 log.go:172] (0xc000882640) (1) Data frame handling\nI0202 13:45:50.959064    1701 log.go:172] (0xc000882640) (1) Data frame sent\nI0202 13:45:50.959073    1701 log.go:172] (0xc00091c370) (0xc000882640) Stream removed, broadcasting: 1\nI0202 13:45:50.959100    1701 log.go:172] (0xc00091c370) Go away received\nI0202 13:45:50.960349    1701 log.go:172] (0xc00091c370) (0xc000882640) Stream removed, broadcasting: 1\nI0202 13:45:50.960377    1701 log.go:172] (0xc00091c370) (0xc000948000) Stream removed, broadcasting: 3\nI0202 13:45:50.960387    1701 log.go:172] (0xc00091c370) (0xc0008826e0) Stream removed, broadcasting: 5\n"
Feb  2 13:45:50.968: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 13:45:50.968: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

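(Editor's note: the command just run deliberately breaks the pod's readiness: once index.html leaves nginx's web root, the HTTP readiness probe starts failing, while "|| true" keeps the exec itself from reporting an error. A standalone Go sketch of issuing the same command with os/exec; the real suite goes through its own kubectl wrapper, so this form is an assumption.)

package main

import (
	"fmt"
	"os/exec"
)

// breakReadiness moves nginx's index.html out of the web root so the pod's
// HTTP readiness probe starts failing; "|| true" keeps the exec itself from
// reporting an error even if the file was already moved.
func breakReadiness(kubeconfig, namespace, pod string) (string, error) {
	out, err := exec.Command("kubectl",
		"--kubeconfig="+kubeconfig,
		"exec", "--namespace="+namespace, pod, "--",
		"/bin/sh", "-x", "-c",
		"mv -v /usr/share/nginx/html/index.html /tmp/ || true",
	).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := breakReadiness("/root/.kube/config", "statefulset-6374", "ss-0")
	if err != nil {
		fmt.Println("exec failed:", err)
	}
	fmt.Print(out)
}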
Feb  2 13:45:50.976: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  2 13:46:00.986: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 13:46:00.986: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 13:46:01.022: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  2 13:46:01.022: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:01.023: INFO: 
Feb  2 13:46:01.023: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  2 13:46:02.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979868792s
Feb  2 13:46:03.849: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.548553636s
Feb  2 13:46:04.859: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.153273851s
Feb  2 13:46:05.938: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.143385449s
Feb  2 13:46:08.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.064579798s
Feb  2 13:46:09.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.272019438s
Feb  2 13:46:10.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 258.256946ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6374
Feb  2 13:46:11.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:46:12.428: INFO: stderr: "I0202 13:46:12.041531    1720 log.go:172] (0xc0009904d0) (0xc0008d4780) Create stream\nI0202 13:46:12.041783    1720 log.go:172] (0xc0009904d0) (0xc0008d4780) Stream added, broadcasting: 1\nI0202 13:46:12.055280    1720 log.go:172] (0xc0009904d0) Reply frame received for 1\nI0202 13:46:12.055379    1720 log.go:172] (0xc0009904d0) (0xc0008d4000) Create stream\nI0202 13:46:12.055403    1720 log.go:172] (0xc0009904d0) (0xc0008d4000) Stream added, broadcasting: 3\nI0202 13:46:12.056837    1720 log.go:172] (0xc0009904d0) Reply frame received for 3\nI0202 13:46:12.056883    1720 log.go:172] (0xc0009904d0) (0xc0009c4000) Create stream\nI0202 13:46:12.056894    1720 log.go:172] (0xc0009904d0) (0xc0009c4000) Stream added, broadcasting: 5\nI0202 13:46:12.058809    1720 log.go:172] (0xc0009904d0) Reply frame received for 5\nI0202 13:46:12.243624    1720 log.go:172] (0xc0009904d0) Data frame received for 5\nI0202 13:46:12.243852    1720 log.go:172] (0xc0009c4000) (5) Data frame handling\nI0202 13:46:12.243889    1720 log.go:172] (0xc0009c4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 13:46:12.246070    1720 log.go:172] (0xc0009904d0) Data frame received for 3\nI0202 13:46:12.246293    1720 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0202 13:46:12.246359    1720 log.go:172] (0xc0008d4000) (3) Data frame sent\nI0202 13:46:12.417007    1720 log.go:172] (0xc0009904d0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0202 13:46:12.417344    1720 log.go:172] (0xc0009904d0) Data frame received for 1\nI0202 13:46:12.417375    1720 log.go:172] (0xc0008d4780) (1) Data frame handling\nI0202 13:46:12.417414    1720 log.go:172] (0xc0008d4780) (1) Data frame sent\nI0202 13:46:12.417426    1720 log.go:172] (0xc0009904d0) (0xc0009c4000) Stream removed, broadcasting: 5\nI0202 13:46:12.417526    1720 log.go:172] (0xc0009904d0) (0xc0008d4780) Stream removed, broadcasting: 1\nI0202 13:46:12.417560    1720 log.go:172] (0xc0009904d0) Go away received\nI0202 13:46:12.418709    1720 log.go:172] (0xc0009904d0) (0xc0008d4780) Stream removed, broadcasting: 1\nI0202 13:46:12.418755    1720 log.go:172] (0xc0009904d0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0202 13:46:12.418765    1720 log.go:172] (0xc0009904d0) (0xc0009c4000) Stream removed, broadcasting: 5\n"
Feb  2 13:46:12.428: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 13:46:12.428: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 13:46:12.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:46:12.980: INFO: stderr: "I0202 13:46:12.621420    1741 log.go:172] (0xc0008ee000) (0xc000620140) Create stream\nI0202 13:46:12.621859    1741 log.go:172] (0xc0008ee000) (0xc000620140) Stream added, broadcasting: 1\nI0202 13:46:12.626949    1741 log.go:172] (0xc0008ee000) Reply frame received for 1\nI0202 13:46:12.627034    1741 log.go:172] (0xc0008ee000) (0xc000858000) Create stream\nI0202 13:46:12.627045    1741 log.go:172] (0xc0008ee000) (0xc000858000) Stream added, broadcasting: 3\nI0202 13:46:12.628672    1741 log.go:172] (0xc0008ee000) Reply frame received for 3\nI0202 13:46:12.628728    1741 log.go:172] (0xc0008ee000) (0xc0006da000) Create stream\nI0202 13:46:12.628743    1741 log.go:172] (0xc0008ee000) (0xc0006da000) Stream added, broadcasting: 5\nI0202 13:46:12.630109    1741 log.go:172] (0xc0008ee000) Reply frame received for 5\nI0202 13:46:12.794084    1741 log.go:172] (0xc0008ee000) Data frame received for 5\nI0202 13:46:12.794579    1741 log.go:172] (0xc0006da000) (5) Data frame handling\nI0202 13:46:12.794681    1741 log.go:172] (0xc0006da000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0202 13:46:12.795641    1741 log.go:172] (0xc0008ee000) Data frame received for 3\nI0202 13:46:12.795746    1741 log.go:172] (0xc000858000) (3) Data frame handling\nI0202 13:46:12.795788    1741 log.go:172] (0xc000858000) (3) Data frame sent\nI0202 13:46:12.963720    1741 log.go:172] (0xc0008ee000) (0xc000858000) Stream removed, broadcasting: 3\nI0202 13:46:12.964001    1741 log.go:172] (0xc0008ee000) Data frame received for 1\nI0202 13:46:12.964026    1741 log.go:172] (0xc000620140) (1) Data frame handling\nI0202 13:46:12.964047    1741 log.go:172] (0xc000620140) (1) Data frame sent\nI0202 13:46:12.964125    1741 log.go:172] (0xc0008ee000) (0xc000620140) Stream removed, broadcasting: 1\nI0202 13:46:12.964240    1741 log.go:172] (0xc0008ee000) (0xc0006da000) Stream removed, broadcasting: 5\nI0202 13:46:12.964374    1741 log.go:172] (0xc0008ee000) Go away received\nI0202 13:46:12.965339    1741 log.go:172] (0xc0008ee000) (0xc000620140) Stream removed, broadcasting: 1\nI0202 13:46:12.965403    1741 log.go:172] (0xc0008ee000) (0xc000858000) Stream removed, broadcasting: 3\nI0202 13:46:12.965457    1741 log.go:172] (0xc0008ee000) (0xc0006da000) Stream removed, broadcasting: 5\n"
Feb  2 13:46:12.981: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 13:46:12.981: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 13:46:12.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:46:13.770: INFO: stderr: "I0202 13:46:13.351420    1758 log.go:172] (0xc0001460b0) (0xc0007a66e0) Create stream\nI0202 13:46:13.351890    1758 log.go:172] (0xc0001460b0) (0xc0007a66e0) Stream added, broadcasting: 1\nI0202 13:46:13.357903    1758 log.go:172] (0xc0001460b0) Reply frame received for 1\nI0202 13:46:13.357996    1758 log.go:172] (0xc0001460b0) (0xc0003bc320) Create stream\nI0202 13:46:13.358012    1758 log.go:172] (0xc0001460b0) (0xc0003bc320) Stream added, broadcasting: 3\nI0202 13:46:13.359391    1758 log.go:172] (0xc0001460b0) Reply frame received for 3\nI0202 13:46:13.359440    1758 log.go:172] (0xc0001460b0) (0xc000474320) Create stream\nI0202 13:46:13.359451    1758 log.go:172] (0xc0001460b0) (0xc000474320) Stream added, broadcasting: 5\nI0202 13:46:13.360851    1758 log.go:172] (0xc0001460b0) Reply frame received for 5\nI0202 13:46:13.522042    1758 log.go:172] (0xc0001460b0) Data frame received for 3\nI0202 13:46:13.522222    1758 log.go:172] (0xc0003bc320) (3) Data frame handling\nI0202 13:46:13.522238    1758 log.go:172] (0xc0003bc320) (3) Data frame sent\nI0202 13:46:13.522291    1758 log.go:172] (0xc0001460b0) Data frame received for 5\nI0202 13:46:13.522298    1758 log.go:172] (0xc000474320) (5) Data frame handling\nI0202 13:46:13.522311    1758 log.go:172] (0xc000474320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0202 13:46:13.743492    1758 log.go:172] (0xc0001460b0) Data frame received for 1\nI0202 13:46:13.744072    1758 log.go:172] (0xc0001460b0) (0xc0003bc320) Stream removed, broadcasting: 3\nI0202 13:46:13.744522    1758 log.go:172] (0xc0007a66e0) (1) Data frame handling\nI0202 13:46:13.744959    1758 log.go:172] (0xc0007a66e0) (1) Data frame sent\nI0202 13:46:13.745113    1758 log.go:172] (0xc0001460b0) (0xc000474320) Stream removed, broadcasting: 5\nI0202 13:46:13.745244    1758 log.go:172] (0xc0001460b0) (0xc0007a66e0) Stream removed, broadcasting: 1\nI0202 13:46:13.745532    1758 log.go:172] (0xc0001460b0) Go away received\nI0202 13:46:13.748009    1758 log.go:172] (0xc0001460b0) (0xc0007a66e0) Stream removed, broadcasting: 1\nI0202 13:46:13.748105    1758 log.go:172] (0xc0001460b0) (0xc0003bc320) Stream removed, broadcasting: 3\nI0202 13:46:13.748128    1758 log.go:172] (0xc0001460b0) (0xc000474320) Stream removed, broadcasting: 5\n"
Feb  2 13:46:13.770: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 13:46:13.770: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 13:46:13.790: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 13:46:13.790: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 13:46:13.790: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  2 13:46:13.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 13:46:14.484: INFO: stderr: "I0202 13:46:14.034945    1778 log.go:172] (0xc0009a60b0) (0xc0007b4640) Create stream\nI0202 13:46:14.035153    1778 log.go:172] (0xc0009a60b0) (0xc0007b4640) Stream added, broadcasting: 1\nI0202 13:46:14.042043    1778 log.go:172] (0xc0009a60b0) Reply frame received for 1\nI0202 13:46:14.042089    1778 log.go:172] (0xc0009a60b0) (0xc0005d21e0) Create stream\nI0202 13:46:14.042102    1778 log.go:172] (0xc0009a60b0) (0xc0005d21e0) Stream added, broadcasting: 3\nI0202 13:46:14.046372    1778 log.go:172] (0xc0009a60b0) Reply frame received for 3\nI0202 13:46:14.046395    1778 log.go:172] (0xc0009a60b0) (0xc0005d2280) Create stream\nI0202 13:46:14.046404    1778 log.go:172] (0xc0009a60b0) (0xc0005d2280) Stream added, broadcasting: 5\nI0202 13:46:14.049552    1778 log.go:172] (0xc0009a60b0) Reply frame received for 5\nI0202 13:46:14.282771    1778 log.go:172] (0xc0009a60b0) Data frame received for 3\nI0202 13:46:14.283073    1778 log.go:172] (0xc0005d21e0) (3) Data frame handling\nI0202 13:46:14.283176    1778 log.go:172] (0xc0005d21e0) (3) Data frame sent\nI0202 13:46:14.283297    1778 log.go:172] (0xc0009a60b0) Data frame received for 5\nI0202 13:46:14.283470    1778 log.go:172] (0xc0005d2280) (5) Data frame handling\nI0202 13:46:14.283497    1778 log.go:172] (0xc0005d2280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:46:14.469303    1778 log.go:172] (0xc0009a60b0) (0xc0005d21e0) Stream removed, broadcasting: 3\nI0202 13:46:14.469561    1778 log.go:172] (0xc0009a60b0) Data frame received for 1\nI0202 13:46:14.469576    1778 log.go:172] (0xc0009a60b0) (0xc0005d2280) Stream removed, broadcasting: 5\nI0202 13:46:14.469639    1778 log.go:172] (0xc0007b4640) (1) Data frame handling\nI0202 13:46:14.469662    1778 log.go:172] (0xc0007b4640) (1) Data frame sent\nI0202 13:46:14.469679    1778 log.go:172] (0xc0009a60b0) (0xc0007b4640) Stream removed, broadcasting: 1\nI0202 13:46:14.469728    1778 log.go:172] (0xc0009a60b0) Go away received\nI0202 13:46:14.471150    1778 log.go:172] (0xc0009a60b0) (0xc0007b4640) Stream removed, broadcasting: 1\nI0202 13:46:14.471168    1778 log.go:172] (0xc0009a60b0) (0xc0005d21e0) Stream removed, broadcasting: 3\nI0202 13:46:14.471179    1778 log.go:172] (0xc0009a60b0) (0xc0005d2280) Stream removed, broadcasting: 5\n"
Feb  2 13:46:14.484: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 13:46:14.484: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 13:46:14.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 13:46:14.912: INFO: stderr: "I0202 13:46:14.663996    1795 log.go:172] (0xc00087a0b0) (0xc0008a40a0) Create stream\nI0202 13:46:14.664101    1795 log.go:172] (0xc00087a0b0) (0xc0008a40a0) Stream added, broadcasting: 1\nI0202 13:46:14.673117    1795 log.go:172] (0xc00087a0b0) Reply frame received for 1\nI0202 13:46:14.673185    1795 log.go:172] (0xc00087a0b0) (0xc0001041e0) Create stream\nI0202 13:46:14.673196    1795 log.go:172] (0xc00087a0b0) (0xc0001041e0) Stream added, broadcasting: 3\nI0202 13:46:14.675856    1795 log.go:172] (0xc00087a0b0) Reply frame received for 3\nI0202 13:46:14.675913    1795 log.go:172] (0xc00087a0b0) (0xc0008a4140) Create stream\nI0202 13:46:14.675924    1795 log.go:172] (0xc00087a0b0) (0xc0008a4140) Stream added, broadcasting: 5\nI0202 13:46:14.677314    1795 log.go:172] (0xc00087a0b0) Reply frame received for 5\nI0202 13:46:14.760872    1795 log.go:172] (0xc00087a0b0) Data frame received for 5\nI0202 13:46:14.760974    1795 log.go:172] (0xc0008a4140) (5) Data frame handling\nI0202 13:46:14.760994    1795 log.go:172] (0xc0008a4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:46:14.802101    1795 log.go:172] (0xc00087a0b0) Data frame received for 3\nI0202 13:46:14.802152    1795 log.go:172] (0xc0001041e0) (3) Data frame handling\nI0202 13:46:14.802161    1795 log.go:172] (0xc0001041e0) (3) Data frame sent\nI0202 13:46:14.904680    1795 log.go:172] (0xc00087a0b0) (0xc0008a4140) Stream removed, broadcasting: 5\nI0202 13:46:14.904970    1795 log.go:172] (0xc00087a0b0) Data frame received for 1\nI0202 13:46:14.905023    1795 log.go:172] (0xc00087a0b0) (0xc0001041e0) Stream removed, broadcasting: 3\nI0202 13:46:14.905051    1795 log.go:172] (0xc0008a40a0) (1) Data frame handling\nI0202 13:46:14.905079    1795 log.go:172] (0xc0008a40a0) (1) Data frame sent\nI0202 13:46:14.905108    1795 log.go:172] (0xc00087a0b0) (0xc0008a40a0) Stream removed, broadcasting: 1\nI0202 13:46:14.905118    1795 log.go:172] (0xc00087a0b0) Go away received\nI0202 13:46:14.905969    1795 log.go:172] (0xc00087a0b0) (0xc0008a40a0) Stream removed, broadcasting: 1\nI0202 13:46:14.905985    1795 log.go:172] (0xc00087a0b0) (0xc0001041e0) Stream removed, broadcasting: 3\nI0202 13:46:14.905995    1795 log.go:172] (0xc00087a0b0) (0xc0008a4140) Stream removed, broadcasting: 5\n"
Feb  2 13:46:14.912: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 13:46:14.912: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 13:46:14.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 13:46:15.371: INFO: stderr: "I0202 13:46:15.052218    1809 log.go:172] (0xc00012a630) (0xc0008526e0) Create stream\nI0202 13:46:15.052350    1809 log.go:172] (0xc00012a630) (0xc0008526e0) Stream added, broadcasting: 1\nI0202 13:46:15.058654    1809 log.go:172] (0xc00012a630) Reply frame received for 1\nI0202 13:46:15.058742    1809 log.go:172] (0xc00012a630) (0xc000882000) Create stream\nI0202 13:46:15.058752    1809 log.go:172] (0xc00012a630) (0xc000882000) Stream added, broadcasting: 3\nI0202 13:46:15.060171    1809 log.go:172] (0xc00012a630) Reply frame received for 3\nI0202 13:46:15.060190    1809 log.go:172] (0xc00012a630) (0xc0008820a0) Create stream\nI0202 13:46:15.060199    1809 log.go:172] (0xc00012a630) (0xc0008820a0) Stream added, broadcasting: 5\nI0202 13:46:15.061003    1809 log.go:172] (0xc00012a630) Reply frame received for 5\nI0202 13:46:15.167563    1809 log.go:172] (0xc00012a630) Data frame received for 5\nI0202 13:46:15.167682    1809 log.go:172] (0xc0008820a0) (5) Data frame handling\nI0202 13:46:15.167716    1809 log.go:172] (0xc0008820a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 13:46:15.272944    1809 log.go:172] (0xc00012a630) Data frame received for 3\nI0202 13:46:15.273029    1809 log.go:172] (0xc000882000) (3) Data frame handling\nI0202 13:46:15.273050    1809 log.go:172] (0xc000882000) (3) Data frame sent\nI0202 13:46:15.359743    1809 log.go:172] (0xc00012a630) Data frame received for 1\nI0202 13:46:15.359898    1809 log.go:172] (0xc00012a630) (0xc0008820a0) Stream removed, broadcasting: 5\nI0202 13:46:15.359988    1809 log.go:172] (0xc0008526e0) (1) Data frame handling\nI0202 13:46:15.360023    1809 log.go:172] (0xc00012a630) (0xc000882000) Stream removed, broadcasting: 3\nI0202 13:46:15.360068    1809 log.go:172] (0xc0008526e0) (1) Data frame sent\nI0202 13:46:15.360096    1809 log.go:172] (0xc00012a630) (0xc0008526e0) Stream removed, broadcasting: 1\nI0202 13:46:15.360126    1809 log.go:172] (0xc00012a630) Go away received\nI0202 13:46:15.360882    1809 log.go:172] (0xc00012a630) (0xc0008526e0) Stream removed, broadcasting: 1\nI0202 13:46:15.360908    1809 log.go:172] (0xc00012a630) (0xc000882000) Stream removed, broadcasting: 3\nI0202 13:46:15.360921    1809 log.go:172] (0xc00012a630) (0xc0008820a0) Stream removed, broadcasting: 5\n"
Feb  2 13:46:15.371: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 13:46:15.371: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 13:46:15.371: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 13:46:15.378: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  2 13:46:25.388: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 13:46:25.388: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 13:46:25.388: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  2 13:46:25.475: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:25.476: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:25.476: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:25.476: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:25.476: INFO: 
Feb  2 13:46:25.476: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:27.675: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:27.676: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:27.676: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:27.676: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:27.676: INFO: 
Feb  2 13:46:27.676: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:28.691: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:28.692: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:28.692: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:28.692: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:28.692: INFO: 
Feb  2 13:46:28.692: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:29.709: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:29.709: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:29.709: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:29.710: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:29.710: INFO: 
Feb  2 13:46:29.710: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:30 / 13:46:31: pod listing unchanged (ss-0, ss-1, ss-2 all Running, nginx containers not ready); StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:32.959: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:32.960: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:32.960: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:32.960: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:32.960: INFO: 
Feb  2 13:46:32.960: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:33.973: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:33.973: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:45:40 +0000 UTC  }]
Feb  2 13:46:33.974: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:33.974: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:33.974: INFO: 
Feb  2 13:46:33.974: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  2 13:46:34.987: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  2 13:46:34.987: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:34.987: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 13:46:01 +0000 UTC  }]
Feb  2 13:46:34.987: INFO: 
Feb  2 13:46:34.987: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6374
Feb  2 13:46:35.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:46:36.188: INFO: rc: 1
Feb  2 13:46:36.188: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002c640c0 exit status 1   true [0xc00211d160 0xc00211d178 0xc00211d190] [0xc00211d160 0xc00211d178 0xc00211d190] [0xc00211d170 0xc00211d188] [0xba6c50 0xba6c50] 0xc002bc2720 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Feb  2 13:46:46.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:46:46.471: INFO: rc: 1
Feb  2 13:46:46.471: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0029d0090 exit status 1   true [0xc000010108 0xc0000103d0 0xc000010440] [0xc000010108 0xc0000103d0 0xc000010440] [0xc000010398 0xc000010400] [0xba6c50 0xba6c50] 0xc002232240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  2 13:46:56 - 13:51:31: the identical RunHostCmd attempt was retried every 10s (28 further attempts), each exiting with rc 1 and stderr: Error from server (NotFound): pods "ss-1" not found
Feb  2 13:51:41.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6374 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 13:51:41.856: INFO: rc: 1
Feb  2 13:51:41.856: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb  2 13:51:41.856: INFO: Scaling statefulset ss to 0
Feb  2 13:51:41.873: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  2 13:51:41.875: INFO: Deleting all statefulset in ns statefulset-6374
Feb  2 13:51:41.878: INFO: Scaling statefulset ss to 0
Feb  2 13:51:41.892: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 13:51:41.895: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:51:41.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6374" for this suite.
Feb  2 13:51:47.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:51:48.159: INFO: namespace statefulset-6374 deletion completed in 6.218386537s

• [SLOW TEST:367.887 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
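The burst-scaling flow exercised above (scale the set to 0, then poll status.replicas) can be reproduced by hand with plain kubectl. A minimal sketch, assuming a reachable cluster and the 'ss' / 'statefulset-6374' names taken from this log:

  kubectl --kubeconfig=/root/.kube/config -n statefulset-6374 scale statefulset ss --replicas=0
  # Poll until the controller reports zero replicas, mirroring the suite's wait loop.
  until [ "$(kubectl -n statefulset-6374 get statefulset ss -o jsonpath='{.status.replicas}')" = "0" ]; do
    sleep 1
  done

Note that the '|| true' on the suite's 'mv' command only makes the command non-fatal inside the pod; the retries above fail because the pod 'ss-1' itself is already gone, which is exactly the "even with unhealthy pods" condition the spec is stressing.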
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:51:48.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00
Feb  2 13:51:48.413: INFO: Pod name my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00: Found 0 pods out of 1
Feb  2 13:51:53.427: INFO: Pod name my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00: Found 1 pods out of 1
Feb  2 13:51:53.427: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00" are running
Feb  2 13:51:55.441: INFO: Pod "my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00-fn62r" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 13:51:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 13:51:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 13:51:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 13:51:48 +0000 UTC Reason: Message:}])
Feb  2 13:51:55.441: INFO: Trying to dial the pod
Feb  2 13:52:00.492: INFO: Controller my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00: Got expected result from replica 1 [my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00-fn62r]: "my-hostname-basic-6d3719b3-68c3-4c76-ba9f-f32373042a00-fn62r", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:52:00.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5151" for this suite.
Feb  2 13:52:06.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:52:06.680: INFO: namespace replication-controller-5151 deletion completed in 6.177327424s

• [SLOW TEST:18.520 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
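A hand-rolled version of this spec creates a one-replica ReplicationController from a public serve-hostname-style image and checks that the replica answers with its own pod name. This is a sketch only; the manifest names and the image tag are assumptions, not taken from the suite's source:

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector:
      app: my-hostname-basic
    template:
      metadata:
        labels:
          app: my-hostname-basic
      spec:
        containers:
        - name: my-hostname-basic
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
          ports:
          - containerPort: 9376
  EOF
  kubectl wait --for=condition=Ready pod -l app=my-hostname-basic
  # Each replica serves its own pod name, which is what the
  # "Got expected result from replica 1" line above is asserting.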
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:52:06.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:52:06.882: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  2 13:52:06.914: INFO: Number of nodes with available pods: 0
Feb  2 13:52:06.914: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:07.935: INFO: Number of nodes with available pods: 0
Feb  2 13:52:07.935: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:08.937: INFO: Number of nodes with available pods: 0
Feb  2 13:52:08.937: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:09.931: INFO: Number of nodes with available pods: 0
Feb  2 13:52:09.931: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:10.948: INFO: Number of nodes with available pods: 0
Feb  2 13:52:10.948: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:13.145: INFO: Number of nodes with available pods: 0
Feb  2 13:52:13.145: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:14.643: INFO: Number of nodes with available pods: 0
Feb  2 13:52:14.644: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:14.942: INFO: Number of nodes with available pods: 0
Feb  2 13:52:14.942: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:15.929: INFO: Number of nodes with available pods: 0
Feb  2 13:52:15.929: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:16.953: INFO: Number of nodes with available pods: 1
Feb  2 13:52:16.953: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:17.925: INFO: Number of nodes with available pods: 2
Feb  2 13:52:17.925: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  2 13:52:17.974: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:17.974: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:19.051: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:19.051: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:20.054: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:20.054: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:21.051: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:21.052: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:22.067: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:22.067: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:23.061: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:23.061: INFO: Pod daemon-set-7hz9d is not available
Feb  2 13:52:23.062: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:24.053: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:24.053: INFO: Pod daemon-set-7hz9d is not available
Feb  2 13:52:24.053: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:25.052: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:25.052: INFO: Pod daemon-set-7hz9d is not available
Feb  2 13:52:25.052: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:26.052: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:26.053: INFO: Pod daemon-set-7hz9d is not available
Feb  2 13:52:26.053: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:27.051: INFO: Wrong image for pod: daemon-set-7hz9d. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:27.051: INFO: Pod daemon-set-7hz9d is not available
Feb  2 13:52:27.051: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:28.057: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:28.057: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:29.053: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:29.053: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:30.108: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:30.108: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:31.053: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:31.053: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:32.055: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:32.055: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:33.626: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:33.626: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:35.053: INFO: Pod daemon-set-gmv86 is not available
Feb  2 13:52:35.053: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:36.086: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:37.058: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:38.049: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:39.049: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:40.078: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:40.078: INFO: Pod daemon-set-zthpw is not available
Feb  2 13:52:41.054: INFO: Wrong image for pod: daemon-set-zthpw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  2 13:52:41.054: INFO: Pod daemon-set-zthpw is not available
Feb  2 13:52:42.050: INFO: Pod daemon-set-xb6rr is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  2 13:52:42.063: INFO: Number of nodes with available pods: 1
Feb  2 13:52:42.063: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:43.077: INFO: Number of nodes with available pods: 1
Feb  2 13:52:43.077: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:44.075: INFO: Number of nodes with available pods: 1
Feb  2 13:52:44.075: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:45.077: INFO: Number of nodes with available pods: 1
Feb  2 13:52:45.077: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:46.083: INFO: Number of nodes with available pods: 1
Feb  2 13:52:46.083: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:47.078: INFO: Number of nodes with available pods: 1
Feb  2 13:52:47.079: INFO: Node iruya-node is running more than one daemon pod
Feb  2 13:52:48.100: INFO: Number of nodes with available pods: 2
Feb  2 13:52:48.100: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-767, will wait for the garbage collector to delete the pods
Feb  2 13:52:48.198: INFO: Deleting DaemonSet.extensions daemon-set took: 9.830863ms
Feb  2 13:52:48.499: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.551431ms
Feb  2 13:52:55.103: INFO: Number of nodes with available pods: 0
Feb  2 13:52:55.103: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 13:52:55.107: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-767/daemonsets","resourceVersion":"22821874"},"items":null}

Feb  2 13:52:55.110: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-767/pods","resourceVersion":"22821874"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:52:55.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-767" for this suite.
Feb  2 13:53:01.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:53:01.259: INFO: namespace daemonsets-767 deletion completed in 6.122258736s

• [SLOW TEST:54.578 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
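The image bump that drives this rollout can be issued directly; the suite then only compares each pod's image and availability. A sketch assuming the 'daemon-set' name and namespace from the log, and a container named 'app' (the container name is an assumption):

  kubectl -n daemonsets-767 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
  # With updateStrategy RollingUpdate this replaces pods one node at a time,
  # which is the 'Pod ... is not available' churn visible in the log above.
  kubectl -n daemonsets-767 rollout status daemonset/daemon-set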
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:53:01.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6229/configmap-test-bbfa80d3-79de-441d-83b7-3e1298444fb9
STEP: Creating a pod to test consume configMaps
Feb  2 13:53:01.418: INFO: Waiting up to 5m0s for pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0" in namespace "configmap-6229" to be "success or failure"
Feb  2 13:53:01.428: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.804924ms
Feb  2 13:53:03.437: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018308882s
Feb  2 13:53:05.569: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15011946s
Feb  2 13:53:07.577: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158519382s
Feb  2 13:53:09.587: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168933471s
Feb  2 13:53:11.596: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178003255s
STEP: Saw pod success
Feb  2 13:53:11.597: INFO: Pod "pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0" satisfied condition "success or failure"
Feb  2 13:53:11.602: INFO: Trying to get logs from node iruya-node pod pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0 container env-test: 
STEP: delete the pod
Feb  2 13:53:11.684: INFO: Waiting for pod pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0 to disappear
Feb  2 13:53:11.694: INFO: Pod pod-configmaps-8bab176b-a48a-4af6-b9f6-a58e6d4832b0 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:53:11.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6229" for this suite.
Feb  2 13:53:17.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:53:17.976: INFO: namespace configmap-6229 deletion completed in 6.255192656s

• [SLOW TEST:16.716 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
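Reduced to its parts, the pod under test injects a ConfigMap key through env.valueFrom.configMapKeyRef and the container simply echoes it. A minimal sketch; all names here are illustrative, not the suite's generated ones:

  kubectl create configmap configmap-test --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-env
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-test
            key: data-1
  EOF
  kubectl logs pod-configmaps-env   # prints CONFIG_DATA_1=value-1 once the pod has run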
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:53:17.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  2 13:53:18.104: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix347006905/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:53:18.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6957" for this suite.
Feb  2 13:53:24.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:53:24.458: INFO: namespace kubectl-6957 deletion completed in 6.231252015s

• [SLOW TEST:6.482 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
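The --unix-socket check is straightforward to replicate: run the proxy on a socket path and issue the same /api/ request over it with curl's --unix-socket flag (curl 7.40+). Sketch only; the socket path is illustrative:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  sleep 1   # give the proxy a moment to bind the socket
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill %1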
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:53:24.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-f36a519e-5fd3-45d2-84c2-82cd199d29d2 in namespace container-probe-3399
Feb  2 13:53:32.658: INFO: Started pod busybox-f36a519e-5fd3-45d2-84c2-82cd199d29d2 in namespace container-probe-3399
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 13:53:32.666: INFO: Initial restart count of pod busybox-f36a519e-5fd3-45d2-84c2-82cd199d29d2 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:57:34.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3399" for this suite.
Feb  2 13:57:40.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:57:40.732: INFO: namespace container-probe-3399 deletion completed in 6.204168263s

• [SLOW TEST:256.272 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
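This spec is the negative case: the probed file exists for the pod's whole lifetime, so restartCount must stay 0 across the roughly four-minute observation window above. A minimal reproduction, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-liveness
  spec:
    containers:
    - name: busybox
      image: busybox
      args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  # The exec probe keeps succeeding, so this should remain 0:
  kubectl get pod busybox-liveness -o jsonpath='{.status.containerStatuses[0].restartCount}'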
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:57:40.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 13:57:40.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-826'
Feb  2 13:57:42.723: INFO: stderr: ""
Feb  2 13:57:42.723: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  2 13:57:52.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-826 -o json'
Feb  2 13:57:52.938: INFO: stderr: ""
Feb  2 13:57:52.939: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-02T13:57:42Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-826\",\n        \"resourceVersion\": \"22822367\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-826/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ecfec61f-54ed-4779-83f6-047b069d9663\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jwfw5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jwfw5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jwfw5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T13:57:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T13:57:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T13:57:49Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-02T13:57:42Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://70a9d6a7512ddb763ea49a6d4d68252f8697fc5fd29fcc6c0246d321b9c51903\",\n                
\"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-02T13:57:48Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-02T13:57:42Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  2 13:57:52.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-826'
Feb  2 13:57:53.484: INFO: stderr: ""
Feb  2 13:57:53.484: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  2 13:57:53.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-826'
Feb  2 13:58:01.187: INFO: stderr: ""
Feb  2 13:58:01.187: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:58:01.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-826" for this suite.
Feb  2 13:58:07.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:58:07.435: INFO: namespace kubectl-826 deletion completed in 6.239410603s

• [SLOW TEST:26.702 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
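
What the replace test does, condensed to plain kubectl (pod name, namespace, and images taken from the log; the sed edit stands in for the test's in-memory JSON rewrite):

# Fetch the live object, swap the container image, and PUT it back.
kubectl get pod e2e-test-nginx-pod -n kubectl-826 -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f - -n kubectl-826
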
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:58:07.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:58:15.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8678" for this suite.
Feb  2 13:58:21.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:58:22.055: INFO: namespace emptydir-wrapper-8678 deletion completed in 6.29747585s

• [SLOW TEST:14.620 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
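
Secret and configMap volumes are implemented as wrappers around an emptyDir; the test above mounts one of each in the same pod and checks that the wrappers do not collide. An illustrative manifest, with all names made up:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "ls /etc/secret /etc/config && sleep 60"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-demo-secret
  - name: config-vol
    configMap:
      name: wrapper-demo-config
EOF
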
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:58:22.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 13:58:22.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  2 13:58:22.418: INFO: stderr: ""
Feb  2 13:58:22.418: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:58:22.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-249" for this suite.
Feb  2 13:58:28.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:58:28.628: INFO: namespace kubectl-249 deletion completed in 6.204225679s

• [SLOW TEST:6.570 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:58:28.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-c569bd61-d8dc-47e8-8a9a-a1aa215aec13
STEP: Creating a pod to test consume secrets
Feb  2 13:58:28.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b" in namespace "projected-9018" to be "success or failure"
Feb  2 13:58:28.953: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.920149ms
Feb  2 13:58:30.964: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020360391s
Feb  2 13:58:32.974: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029629314s
Feb  2 13:58:34.989: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045197539s
Feb  2 13:58:37.002: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057485622s
STEP: Saw pod success
Feb  2 13:58:37.002: INFO: Pod "pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b" satisfied condition "success or failure"
Feb  2 13:58:37.006: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 13:58:37.095: INFO: Waiting for pod pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b to disappear
Feb  2 13:58:37.204: INFO: Pod pod-projected-secrets-4e60988b-4dbe-4afd-aae6-c7b5e931b78b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:58:37.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9018" for this suite.
Feb  2 13:58:43.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:58:43.403: INFO: namespace projected-9018 deletion completed in 6.190142302s

• [SLOW TEST:14.774 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
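
A sketch of the kind of pod this test builds: a non-root UID, an fsGroup, and a projected secret volume with a non-default file mode (all names and the 0440 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000    # non-root
    fsGroup: 2000      # group ownership applied to volume files
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/creds"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      defaultMode: 0440   # file mode for the projected entries
      sources:
      - secret:
          name: demo-secret
EOF
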
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:58:43.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-03dee19b-41ce-4049-8d92-e8d1d5c14ab3
STEP: Creating secret with name secret-projected-all-test-volume-f139219c-9af6-40e2-9b9f-972e006bb0e8
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  2 13:58:43.507: INFO: Waiting up to 5m0s for pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d" in namespace "projected-3118" to be "success or failure"
Feb  2 13:58:43.568: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.637297ms
Feb  2 13:58:45.576: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069490457s
Feb  2 13:58:47.583: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076269988s
Feb  2 13:58:49.596: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088824246s
Feb  2 13:58:51.606: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098779979s
STEP: Saw pod success
Feb  2 13:58:51.606: INFO: Pod "projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d" satisfied condition "success or failure"
Feb  2 13:58:51.612: INFO: Trying to get logs from node iruya-node pod projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d container projected-all-volume-test: 
STEP: delete the pod
Feb  2 13:58:51.673: INFO: Waiting for pod projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d to disappear
Feb  2 13:58:51.701: INFO: Pod projected-volume-6a425d6f-ae16-4364-af71-dad1ea4bb56d no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:58:51.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3118" for this suite.
Feb  2 13:58:57.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:58:57.966: INFO: namespace projected-3118 deletion completed in 6.197105632s

• [SLOW TEST:14.562 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
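
The projection API lets a single volume combine configMap, secret, and downwardAPI sources, which is the combination the test checks. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
  labels:
    app: demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "ls -R /all-in-one"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config
      - secret:
          name: demo-secret
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
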
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:58:57.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ae40d895-a4c9-41d9-82df-39a4155dafeb
STEP: Creating a pod to test consume secrets
Feb  2 13:58:58.122: INFO: Waiting up to 5m0s for pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a" in namespace "secrets-9421" to be "success or failure"
Feb  2 13:58:58.133: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.054698ms
Feb  2 13:59:00.153: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030832458s
Feb  2 13:59:02.163: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041397424s
Feb  2 13:59:04.177: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055140391s
Feb  2 13:59:06.190: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068000464s
STEP: Saw pod success
Feb  2 13:59:06.190: INFO: Pod "pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a" satisfied condition "success or failure"
Feb  2 13:59:06.226: INFO: Trying to get logs from node iruya-node pod pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a container secret-env-test: 
STEP: delete the pod
Feb  2 13:59:06.285: INFO: Waiting for pod pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a to disappear
Feb  2 13:59:06.289: INFO: Pod pod-secrets-8a8c35e5-4b4b-438f-93cd-5f75dbae712a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:59:06.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9421" for this suite.
Feb  2 13:59:12.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:59:12.495: INFO: namespace secrets-9421 deletion completed in 6.19998257s

• [SLOW TEST:14.529 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
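
Consuming a secret through an env var rather than a volume is a one-field change; a sketch, with secret and key names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF
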
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:59:12.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 13:59:12.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8" in namespace "projected-7954" to be "success or failure"
Feb  2 13:59:12.643: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.223315ms
Feb  2 13:59:14.657: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028241408s
Feb  2 13:59:17.121: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49216471s
Feb  2 13:59:19.142: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513243311s
Feb  2 13:59:21.149: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.520007007s
STEP: Saw pod success
Feb  2 13:59:21.149: INFO: Pod "downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8" satisfied condition "success or failure"
Feb  2 13:59:21.153: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8 container client-container: 
STEP: delete the pod
Feb  2 13:59:21.290: INFO: Waiting for pod downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8 to disappear
Feb  2 13:59:21.301: INFO: Pod downwardapi-volume-a49adda0-7d72-4da4-8f2e-407c1264c2b8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:59:21.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7954" for this suite.
Feb  2 13:59:27.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:59:27.575: INFO: namespace projected-7954 deletion completed in 6.266190259s

• [SLOW TEST:15.079 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
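
The downward API can expose a container's own resource limits as files, which is what the test reads back. A sketch (names illustrative; the container must declare a memory limit, otherwise limits.memory resolves to node allocatable):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
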
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:59:27.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0202 13:59:31.237573       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 13:59:31.237: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:59:31.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4839" for this suite.
Feb  2 13:59:37.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:59:37.427: INFO: namespace gc-4839 deletion completed in 6.18534436s

• [SLOW TEST:9.852 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
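
The garbage-collector assertion hinges on the deletion's propagation policy: a non-orphaning delete removes the Deployment's ReplicaSets and pods too. Rough kubectl and REST equivalents (deployment name illustrative; --cascade is the boolean form used by v1.15-era kubectl):

# Cascading (non-orphan) delete: owned ReplicaSets and Pods go too.
kubectl delete deployment demo-deploy --cascade=true

# The same choice expressed against the REST API, via a local proxy:
kubectl proxy &
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  http://localhost:8001/apis/apps/v1/namespaces/default/deployments/demo-deploy
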
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:59:37.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-4c056f39-0bcf-4ce5-9531-bcae014a0c8a
STEP: Creating a pod to test consume secrets
Feb  2 13:59:37.554: INFO: Waiting up to 5m0s for pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76" in namespace "secrets-9804" to be "success or failure"
Feb  2 13:59:37.566: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472635ms
Feb  2 13:59:39.575: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020527939s
Feb  2 13:59:41.582: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028145526s
Feb  2 13:59:43.594: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039624313s
Feb  2 13:59:45.603: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048501098s
STEP: Saw pod success
Feb  2 13:59:45.603: INFO: Pod "pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76" satisfied condition "success or failure"
Feb  2 13:59:45.608: INFO: Trying to get logs from node iruya-node pod pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76 container secret-volume-test: 
STEP: delete the pod
Feb  2 13:59:45.677: INFO: Waiting for pod pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76 to disappear
Feb  2 13:59:45.708: INFO: Pod pod-secrets-bf12e2c1-2e50-46fd-9cb4-ace8c8ccbb76 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 13:59:45.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9804" for this suite.
Feb  2 13:59:51.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 13:59:51.900: INFO: namespace secrets-9804 deletion completed in 6.181920823s

• [SLOW TEST:14.473 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
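
"With mappings" means the secret keys are remapped to custom paths inside the mount via items, rather than appearing under their own names. A sketch (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1   # remapped file name inside the mount
EOF
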
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 13:59:51.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:00:52.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4709" for this suite.
Feb  2 14:01:14.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:01:14.255: INFO: namespace container-probe-4709 deletion completed in 22.188023343s

• [SLOW TEST:82.354 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
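
Unlike a liveness failure, a failing readiness probe never restarts the container; the pod simply stays NotReady, which is what the test watches for during the minute above. A sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: Ready stays False, restartCount stays 0
      periodSeconds: 5
EOF

# Ready should remain false and the restart count 0:
kubectl get pod readiness-fail-demo \
  -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'
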
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:01:14.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  2 14:01:14.326: INFO: Waiting up to 5m0s for pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c" in namespace "downward-api-804" to be "success or failure"
Feb  2 14:01:14.398: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c": Phase="Pending", Reason="", readiness=false. Elapsed: 71.837953ms
Feb  2 14:01:16.410: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083994005s
Feb  2 14:01:18.423: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097003355s
Feb  2 14:01:20.471: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144606803s
Feb  2 14:01:22.485: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.159048363s
STEP: Saw pod success
Feb  2 14:01:22.485: INFO: Pod "downward-api-86e62854-a83c-4315-905f-fbf3e64f418c" satisfied condition "success or failure"
Feb  2 14:01:22.490: INFO: Trying to get logs from node iruya-node pod downward-api-86e62854-a83c-4315-905f-fbf3e64f418c container dapi-container: 
STEP: delete the pod
Feb  2 14:01:22.582: INFO: Waiting for pod downward-api-86e62854-a83c-4315-905f-fbf3e64f418c to disappear
Feb  2 14:01:22.589: INFO: Pod downward-api-86e62854-a83c-4315-905f-fbf3e64f418c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:01:22.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-804" for this suite.
Feb  2 14:01:28.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:01:28.725: INFO: namespace downward-api-804 deletion completed in 6.128704389s

• [SLOW TEST:14.470 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
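
The downward API field behind this test is status.hostIP, injected as an env var at pod start. A sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the node's IP, resolved at pod start
EOF
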
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:01:28.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 14:01:28.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3688'
Feb  2 14:01:29.050: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 14:01:29.050: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  2 14:01:29.075: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-p62tz]
Feb  2 14:01:29.075: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-p62tz" in namespace "kubectl-3688" to be "running and ready"
Feb  2 14:01:29.148: INFO: Pod "e2e-test-nginx-rc-p62tz": Phase="Pending", Reason="", readiness=false. Elapsed: 72.130937ms
Feb  2 14:01:31.156: INFO: Pod "e2e-test-nginx-rc-p62tz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080293107s
Feb  2 14:01:33.161: INFO: Pod "e2e-test-nginx-rc-p62tz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085331684s
Feb  2 14:01:35.172: INFO: Pod "e2e-test-nginx-rc-p62tz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096977071s
Feb  2 14:01:37.182: INFO: Pod "e2e-test-nginx-rc-p62tz": Phase="Running", Reason="", readiness=true. Elapsed: 8.106493873s
Feb  2 14:01:37.182: INFO: Pod "e2e-test-nginx-rc-p62tz" satisfied condition "running and ready"
Feb  2 14:01:37.182: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-p62tz]
Feb  2 14:01:37.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3688'
Feb  2 14:01:37.394: INFO: stderr: ""
Feb  2 14:01:37.394: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  2 14:01:37.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3688'
Feb  2 14:01:37.612: INFO: stderr: ""
Feb  2 14:01:37.612: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:01:37.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3688" for this suite.
Feb  2 14:01:59.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:01:59.884: INFO: namespace kubectl-3688 deletion completed in 22.266613458s

• [SLOW TEST:31.159 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
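
The stderr captured above is the point of the warning: --generator=run/v1 (which creates a ReplicationController) was already deprecated in v1.15. Both forms, for comparison (resource names illustrative):

# Deprecated form the test exercises: creates a ReplicationController.
kubectl run demo-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1

# Suggested replacement from the deprecation message: create a bare pod.
kubectl run demo-pod --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
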
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:01:59.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:02:00.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3215" for this suite.
Feb  2 14:02:24.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:02:24.280: INFO: namespace pods-3215 deletion completed in 24.219776793s

• [SLOW TEST:24.396 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
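
The QOS class is derived rather than set: no requests or limits at all yields BestEffort (as in the nginx pod dumped earlier), requests equal to limits on every container yields Guaranteed, and anything in between is Burstable. It can be read back directly (pod name illustrative):

kubectl get pod demo-pod -o jsonpath='{.status.qosClass}'
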
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:02:24.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  2 14:02:24.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6505'
Feb  2 14:02:24.798: INFO: stderr: ""
Feb  2 14:02:24.798: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  2 14:02:25.812: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:25.812: INFO: Found 0 / 1
Feb  2 14:02:26.807: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:26.807: INFO: Found 0 / 1
Feb  2 14:02:27.825: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:27.825: INFO: Found 0 / 1
Feb  2 14:02:28.805: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:28.805: INFO: Found 0 / 1
Feb  2 14:02:29.807: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:29.807: INFO: Found 0 / 1
Feb  2 14:02:30.809: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:30.810: INFO: Found 0 / 1
Feb  2 14:02:31.808: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:31.809: INFO: Found 1 / 1
Feb  2 14:02:31.809: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  2 14:02:31.819: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:31.819: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  2 14:02:31.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-s8kv9 --namespace=kubectl-6505 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  2 14:02:32.027: INFO: stderr: ""
Feb  2 14:02:32.027: INFO: stdout: "pod/redis-master-s8kv9 patched\n"
STEP: checking annotations
Feb  2 14:02:32.034: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:02:32.034: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:02:32.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6505" for this suite.
Feb  2 14:02:54.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:02:54.152: INFO: namespace kubectl-6505 deletion completed in 22.114957309s

• [SLOW TEST:29.872 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
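
The patch step above as a standalone command pair (pod name and namespace taken from the log; the x=y annotation is what the test asserts afterwards):

kubectl patch pod redis-master-s8kv9 -n kubectl-6505 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'

# Verify the annotation landed:
kubectl get pod redis-master-s8kv9 -n kubectl-6505 \
  -o jsonpath='{.metadata.annotations.x}'
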
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:02:54.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-676dddb2-683e-441a-a59a-c8ecdfe9c1b5
STEP: Creating a pod to test consume secrets
Feb  2 14:02:54.292: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417" in namespace "projected-8857" to be "success or failure"
Feb  2 14:02:54.301: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417": Phase="Pending", Reason="", readiness=false. Elapsed: 9.087504ms
Feb  2 14:02:56.315: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022893591s
Feb  2 14:02:58.328: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035935953s
Feb  2 14:03:00.342: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050244623s
Feb  2 14:03:02.354: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062280016s
STEP: Saw pod success
Feb  2 14:03:02.354: INFO: Pod "pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417" satisfied condition "success or failure"
Feb  2 14:03:02.360: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417 container secret-volume-test: 
STEP: delete the pod
Feb  2 14:03:02.444: INFO: Waiting for pod pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417 to disappear
Feb  2 14:03:02.456: INFO: Pod pod-projected-secrets-3b46a15d-aa17-4361-80af-88b1b35e8417 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:03:02.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8857" for this suite.
Feb  2 14:03:08.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:03:08.784: INFO: namespace projected-8857 deletion completed in 6.316784022s

• [SLOW TEST:14.631 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:03:08.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3923
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 14:03:08.899: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 14:03:45.139: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3923 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 14:03:45.140: INFO: >>> kubeConfig: /root/.kube/config
I0202 14:03:45.244746       8 log.go:172] (0xc0015ea4d0) (0xc002d7cc80) Create stream
I0202 14:03:45.244822       8 log.go:172] (0xc0015ea4d0) (0xc002d7cc80) Stream added, broadcasting: 1
I0202 14:03:45.253294       8 log.go:172] (0xc0015ea4d0) Reply frame received for 1
I0202 14:03:45.253334       8 log.go:172] (0xc0015ea4d0) (0xc001af5540) Create stream
I0202 14:03:45.253347       8 log.go:172] (0xc0015ea4d0) (0xc001af5540) Stream added, broadcasting: 3
I0202 14:03:45.255450       8 log.go:172] (0xc0015ea4d0) Reply frame received for 3
I0202 14:03:45.255475       8 log.go:172] (0xc0015ea4d0) (0xc001af55e0) Create stream
I0202 14:03:45.255485       8 log.go:172] (0xc0015ea4d0) (0xc001af55e0) Stream added, broadcasting: 5
I0202 14:03:45.258464       8 log.go:172] (0xc0015ea4d0) Reply frame received for 5
I0202 14:03:45.469505       8 log.go:172] (0xc0015ea4d0) Data frame received for 3
I0202 14:03:45.469635       8 log.go:172] (0xc001af5540) (3) Data frame handling
I0202 14:03:45.469671       8 log.go:172] (0xc001af5540) (3) Data frame sent
I0202 14:03:45.622815       8 log.go:172] (0xc0015ea4d0) (0xc001af5540) Stream removed, broadcasting: 3
I0202 14:03:45.623062       8 log.go:172] (0xc0015ea4d0) Data frame received for 1
I0202 14:03:45.623142       8 log.go:172] (0xc0015ea4d0) (0xc001af55e0) Stream removed, broadcasting: 5
I0202 14:03:45.623206       8 log.go:172] (0xc002d7cc80) (1) Data frame handling
I0202 14:03:45.623248       8 log.go:172] (0xc002d7cc80) (1) Data frame sent
I0202 14:03:45.623259       8 log.go:172] (0xc0015ea4d0) (0xc002d7cc80) Stream removed, broadcasting: 1
I0202 14:03:45.623296       8 log.go:172] (0xc0015ea4d0) Go away received
I0202 14:03:45.623823       8 log.go:172] (0xc0015ea4d0) (0xc002d7cc80) Stream removed, broadcasting: 1
I0202 14:03:45.623834       8 log.go:172] (0xc0015ea4d0) (0xc001af5540) Stream removed, broadcasting: 3
I0202 14:03:45.623837       8 log.go:172] (0xc0015ea4d0) (0xc001af55e0) Stream removed, broadcasting: 5
Feb  2 14:03:45.623: INFO: Waiting for endpoints: map[]
Feb  2 14:03:45.634: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3923 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 14:03:45.634: INFO: >>> kubeConfig: /root/.kube/config
I0202 14:03:45.692585       8 log.go:172] (0xc0015eadc0) (0xc002d7d400) Create stream
I0202 14:03:45.692661       8 log.go:172] (0xc0015eadc0) (0xc002d7d400) Stream added, broadcasting: 1
I0202 14:03:45.699355       8 log.go:172] (0xc0015eadc0) Reply frame received for 1
I0202 14:03:45.699397       8 log.go:172] (0xc0015eadc0) (0xc001332820) Create stream
I0202 14:03:45.699411       8 log.go:172] (0xc0015eadc0) (0xc001332820) Stream added, broadcasting: 3
I0202 14:03:45.700777       8 log.go:172] (0xc0015eadc0) Reply frame received for 3
I0202 14:03:45.700803       8 log.go:172] (0xc0015eadc0) (0xc002d7d540) Create stream
I0202 14:03:45.700812       8 log.go:172] (0xc0015eadc0) (0xc002d7d540) Stream added, broadcasting: 5
I0202 14:03:45.702203       8 log.go:172] (0xc0015eadc0) Reply frame received for 5
I0202 14:03:45.838956       8 log.go:172] (0xc0015eadc0) Data frame received for 3
I0202 14:03:45.839037       8 log.go:172] (0xc001332820) (3) Data frame handling
I0202 14:03:45.839058       8 log.go:172] (0xc001332820) (3) Data frame sent
I0202 14:03:45.963170       8 log.go:172] (0xc0015eadc0) (0xc001332820) Stream removed, broadcasting: 3
I0202 14:03:45.963258       8 log.go:172] (0xc0015eadc0) Data frame received for 1
I0202 14:03:45.963286       8 log.go:172] (0xc002d7d400) (1) Data frame handling
I0202 14:03:45.963329       8 log.go:172] (0xc002d7d400) (1) Data frame sent
I0202 14:03:45.963348       8 log.go:172] (0xc0015eadc0) (0xc002d7d540) Stream removed, broadcasting: 5
I0202 14:03:45.963377       8 log.go:172] (0xc0015eadc0) (0xc002d7d400) Stream removed, broadcasting: 1
I0202 14:03:45.963390       8 log.go:172] (0xc0015eadc0) Go away received
I0202 14:03:45.963660       8 log.go:172] (0xc0015eadc0) (0xc002d7d400) Stream removed, broadcasting: 1
I0202 14:03:45.963676       8 log.go:172] (0xc0015eadc0) (0xc001332820) Stream removed, broadcasting: 3
I0202 14:03:45.963690       8 log.go:172] (0xc0015eadc0) (0xc002d7d540) Stream removed, broadcasting: 5
Feb  2 14:03:45.963: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:03:45.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3923" for this suite.
Feb  2 14:04:10.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:04:10.115: INFO: namespace pod-network-test-3923 deletion completed in 24.13830683s

• [SLOW TEST:61.330 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
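
The ExecWithOptions calls above are the framework's equivalent of kubectl exec: the hostNetwork helper pod curls each test pod's /dial endpoint, which in turn probes the target pod over UDP. Reproduced by hand, with names and addresses copied from the log:

kubectl exec -n pod-network-test-3923 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
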
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:04:10.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  2 14:04:10.321: INFO: Waiting up to 5m0s for pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63" in namespace "emptydir-2155" to be "success or failure"
Feb  2 14:04:10.328: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.933262ms
Feb  2 14:04:12.334: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012874537s
Feb  2 14:04:14.341: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020307089s
Feb  2 14:04:16.702: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 6.381208603s
Feb  2 14:04:18.711: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389834257s
Feb  2 14:04:20.719: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.398334139s
STEP: Saw pod success
Feb  2 14:04:20.719: INFO: Pod "pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63" satisfied condition "success or failure"
Feb  2 14:04:20.728: INFO: Trying to get logs from node iruya-node pod pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63 container test-container: 
STEP: delete the pod
Feb  2 14:04:20.886: INFO: Waiting for pod pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63 to disappear
Feb  2 14:04:20.930: INFO: Pod pod-04f20b39-b1b5-4f36-a58b-4cc1f313ae63 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:04:20.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2155" for this suite.
Feb  2 14:04:26.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:04:27.097: INFO: namespace emptydir-2155 deletion completed in 6.15798303s

• [SLOW TEST:16.982 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
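The pod this test builds can be approximated with a plain manifest; a minimal sketch with hypothetical names and a busybox image (the real test uses the e2e mounttest image to verify the file mode):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # non-root, per the test name
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
        volumeMounts:
        - { name: scratch, mountPath: /mnt }
      volumes:
      - name: scratch
        emptyDir: {}               # default medium, i.e. node disk
    EOF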
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:04:27.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-1317, will wait for the garbage collector to delete the pods
Feb  2 14:04:37.274: INFO: Deleting Job.batch foo took: 12.194395ms
Feb  2 14:04:37.574: INFO: Terminating Job.batch foo pods took: 300.502645ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:05:26.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1317" for this suite.
Feb  2 14:05:32.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:05:32.929: INFO: namespace job-1317 deletion completed in 6.138614483s

• [SLOW TEST:65.832 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
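The delete-and-wait flow above is an ordinary cascading delete; a minimal sketch with a hypothetical job name and image (parallelism elided):

    # Create a job, then delete it and let the garbage collector reap its pods
    # (the same "will wait for the garbage collector" step the test performs).
    kubectl create job foo --image=busybox -- sleep 3600
    kubectl delete job foo --cascade=true    # v1.15-era flag; newer kubectl spells it --cascade=background
    kubectl get pods -l job-name=foo         # drains to empty once the GC finishes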
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:05:32.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-3ef4554c-585c-491b-94bd-883d0f5b0716
STEP: Creating a pod to test consume secrets
Feb  2 14:05:33.040: INFO: Waiting up to 5m0s for pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287" in namespace "secrets-9059" to be "success or failure"
Feb  2 14:05:33.051: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Pending", Reason="", readiness=false. Elapsed: 10.969743ms
Feb  2 14:05:35.064: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02414847s
Feb  2 14:05:37.079: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038531644s
Feb  2 14:05:39.256: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215840182s
Feb  2 14:05:41.268: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227754881s
Feb  2 14:05:43.273: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232868269s
STEP: Saw pod success
Feb  2 14:05:43.273: INFO: Pod "pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287" satisfied condition "success or failure"
Feb  2 14:05:43.275: INFO: Trying to get logs from node iruya-node pod pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287 container secret-volume-test: 
STEP: delete the pod
Feb  2 14:05:43.334: INFO: Waiting for pod pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287 to disappear
Feb  2 14:05:43.397: INFO: Pod pod-secrets-288f8cc7-9b78-479c-85f5-bea2773d3287 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:05:43.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9059" for this suite.
Feb  2 14:05:49.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:05:49.584: INFO: namespace secrets-9059 deletion completed in 6.182035293s

• [SLOW TEST:16.655 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
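Mounting one secret in multiple volumes just takes two volume entries referencing the same secretName; a minimal sketch with hypothetical names:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-two-volumes-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
        volumeMounts:
        - { name: vol-1, mountPath: /etc/secret-volume-1, readOnly: true }
        - { name: vol-2, mountPath: /etc/secret-volume-2, readOnly: true }
      volumes:
      - name: vol-1
        secret: { secretName: demo-secret }
      - name: vol-2
        secret: { secretName: demo-secret }
    EOF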
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:05:49.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7055
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  2 14:05:49.687: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  2 14:06:22.593: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 14:06:22.593: INFO: >>> kubeConfig: /root/.kube/config
I0202 14:06:22.684678       8 log.go:172] (0xc001a9e420) (0xc00053a5a0) Create stream
I0202 14:06:22.684901       8 log.go:172] (0xc001a9e420) (0xc00053a5a0) Stream added, broadcasting: 1
I0202 14:06:22.692522       8 log.go:172] (0xc001a9e420) Reply frame received for 1
I0202 14:06:22.692594       8 log.go:172] (0xc001a9e420) (0xc0002f4dc0) Create stream
I0202 14:06:22.692604       8 log.go:172] (0xc001a9e420) (0xc0002f4dc0) Stream added, broadcasting: 3
I0202 14:06:22.698415       8 log.go:172] (0xc001a9e420) Reply frame received for 3
I0202 14:06:22.698450       8 log.go:172] (0xc001a9e420) (0xc0001ca640) Create stream
I0202 14:06:22.698463       8 log.go:172] (0xc001a9e420) (0xc0001ca640) Stream added, broadcasting: 5
I0202 14:06:22.701964       8 log.go:172] (0xc001a9e420) Reply frame received for 5
I0202 14:06:23.858430       8 log.go:172] (0xc001a9e420) Data frame received for 3
I0202 14:06:23.858577       8 log.go:172] (0xc0002f4dc0) (3) Data frame handling
I0202 14:06:23.858626       8 log.go:172] (0xc0002f4dc0) (3) Data frame sent
I0202 14:06:24.054860       8 log.go:172] (0xc001a9e420) (0xc0001ca640) Stream removed, broadcasting: 5
I0202 14:06:24.055154       8 log.go:172] (0xc001a9e420) Data frame received for 1
I0202 14:06:24.055193       8 log.go:172] (0xc001a9e420) (0xc0002f4dc0) Stream removed, broadcasting: 3
I0202 14:06:24.055265       8 log.go:172] (0xc00053a5a0) (1) Data frame handling
I0202 14:06:24.055308       8 log.go:172] (0xc00053a5a0) (1) Data frame sent
I0202 14:06:24.055316       8 log.go:172] (0xc001a9e420) (0xc00053a5a0) Stream removed, broadcasting: 1
I0202 14:06:24.055335       8 log.go:172] (0xc001a9e420) Go away received
I0202 14:06:24.055767       8 log.go:172] (0xc001a9e420) (0xc00053a5a0) Stream removed, broadcasting: 1
I0202 14:06:24.055800       8 log.go:172] (0xc001a9e420) (0xc0002f4dc0) Stream removed, broadcasting: 3
I0202 14:06:24.055808       8 log.go:172] (0xc001a9e420) (0xc0001ca640) Stream removed, broadcasting: 5
Feb  2 14:06:24.055: INFO: Found all expected endpoints: [netserver-0]
Feb  2 14:06:24.065: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7055 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  2 14:06:24.065: INFO: >>> kubeConfig: /root/.kube/config
I0202 14:06:24.130762       8 log.go:172] (0xc002186e70) (0xc0002f5040) Create stream
I0202 14:06:24.130887       8 log.go:172] (0xc002186e70) (0xc0002f5040) Stream added, broadcasting: 1
I0202 14:06:24.140961       8 log.go:172] (0xc002186e70) Reply frame received for 1
I0202 14:06:24.141033       8 log.go:172] (0xc002186e70) (0xc00053a6e0) Create stream
I0202 14:06:24.141050       8 log.go:172] (0xc002186e70) (0xc00053a6e0) Stream added, broadcasting: 3
I0202 14:06:24.143301       8 log.go:172] (0xc002186e70) Reply frame received for 3
I0202 14:06:24.143344       8 log.go:172] (0xc002186e70) (0xc001af5040) Create stream
I0202 14:06:24.143354       8 log.go:172] (0xc002186e70) (0xc001af5040) Stream added, broadcasting: 5
I0202 14:06:24.145521       8 log.go:172] (0xc002186e70) Reply frame received for 5
I0202 14:06:25.284090       8 log.go:172] (0xc002186e70) Data frame received for 3
I0202 14:06:25.284206       8 log.go:172] (0xc00053a6e0) (3) Data frame handling
I0202 14:06:25.284235       8 log.go:172] (0xc00053a6e0) (3) Data frame sent
I0202 14:06:25.476164       8 log.go:172] (0xc002186e70) (0xc00053a6e0) Stream removed, broadcasting: 3
I0202 14:06:25.476317       8 log.go:172] (0xc002186e70) Data frame received for 1
I0202 14:06:25.476362       8 log.go:172] (0xc0002f5040) (1) Data frame handling
I0202 14:06:25.476448       8 log.go:172] (0xc0002f5040) (1) Data frame sent
I0202 14:06:25.476475       8 log.go:172] (0xc002186e70) (0xc001af5040) Stream removed, broadcasting: 5
I0202 14:06:25.476566       8 log.go:172] (0xc002186e70) (0xc0002f5040) Stream removed, broadcasting: 1
I0202 14:06:25.476647       8 log.go:172] (0xc002186e70) Go away received
I0202 14:06:25.477058       8 log.go:172] (0xc002186e70) (0xc0002f5040) Stream removed, broadcasting: 1
I0202 14:06:25.477095       8 log.go:172] (0xc002186e70) (0xc00053a6e0) Stream removed, broadcasting: 3
I0202 14:06:25.477115       8 log.go:172] (0xc002186e70) (0xc001af5040) Stream removed, broadcasting: 5
Feb  2 14:06:25.477: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:06:25.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7055" for this suite.
Feb  2 14:06:47.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:06:47.711: INFO: namespace pod-network-test-7055 deletion completed in 22.222354521s

• [SLOW TEST:58.126 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
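Both probes above are plain netcat writes from the hostNetwork pod to each netserver's UDP port, and can be replayed by hand assuming the same namespace and pod IPs:

    # Send "hostName" over UDP and expect the target pod's hostname back
    # (grep drops the blank line nc -u tends to echo).
    kubectl -n pod-network-test-7055 exec host-test-container-pod -c hostexec -- \
      /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'"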
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:06:47.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  2 14:06:48.592: INFO: Pod name wrapped-volume-race-70bf4b8d-e975-4a1a-85f4-6a2843d98814: Found 0 pods out of 5
Feb  2 14:06:53.603: INFO: Pod name wrapped-volume-race-70bf4b8d-e975-4a1a-85f4-6a2843d98814: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-70bf4b8d-e975-4a1a-85f4-6a2843d98814 in namespace emptydir-wrapper-763, will wait for the garbage collector to delete the pods
Feb  2 14:07:21.764: INFO: Deleting ReplicationController wrapped-volume-race-70bf4b8d-e975-4a1a-85f4-6a2843d98814 took: 11.97029ms
Feb  2 14:07:22.165: INFO: Terminating ReplicationController wrapped-volume-race-70bf4b8d-e975-4a1a-85f4-6a2843d98814 pods took: 400.801262ms
STEP: Creating RC which spawns configmap-volume pods
Feb  2 14:08:16.835: INFO: Pod name wrapped-volume-race-5c74d3c5-1a7c-4d0e-8f81-1e3211ab3c28: Found 0 pods out of 5
Feb  2 14:08:21.850: INFO: Pod name wrapped-volume-race-5c74d3c5-1a7c-4d0e-8f81-1e3211ab3c28: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5c74d3c5-1a7c-4d0e-8f81-1e3211ab3c28 in namespace emptydir-wrapper-763, will wait for the garbage collector to delete the pods
Feb  2 14:08:52.089: INFO: Deleting ReplicationController wrapped-volume-race-5c74d3c5-1a7c-4d0e-8f81-1e3211ab3c28 took: 21.840477ms
Feb  2 14:08:52.490: INFO: Terminating ReplicationController wrapped-volume-race-5c74d3c5-1a7c-4d0e-8f81-1e3211ab3c28 pods took: 400.963051ms
STEP: Creating RC which spawns configmap-volume pods
Feb  2 14:09:36.850: INFO: Pod name wrapped-volume-race-bd928dfc-1081-4cdc-b9b3-afca003560f8: Found 0 pods out of 5
Feb  2 14:09:41.875: INFO: Pod name wrapped-volume-race-bd928dfc-1081-4cdc-b9b3-afca003560f8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bd928dfc-1081-4cdc-b9b3-afca003560f8 in namespace emptydir-wrapper-763, will wait for the garbage collector to delete the pods
Feb  2 14:10:12.022: INFO: Deleting ReplicationController wrapped-volume-race-bd928dfc-1081-4cdc-b9b3-afca003560f8 took: 16.684938ms
Feb  2 14:10:12.423: INFO: Terminating ReplicationController wrapped-volume-race-bd928dfc-1081-4cdc-b9b3-afca003560f8 pods took: 401.121953ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:10:58.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-763" for this suite.
Feb  2 14:11:08.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:11:08.571: INFO: namespace emptydir-wrapper-763 deletion completed in 10.173609736s

• [SLOW TEST:260.860 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
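The RC the test spawns is ordinary except that each pod mounts many configMap volumes at once (the wrapper emptyDir is created implicitly by the kubelet); a scaled-down sketch with two configmaps instead of fifty and hypothetical names:

    for i in 1 2; do kubectl create configmap race-cm-$i --from-literal=k=v; done
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: wrapped-volume-race-demo
    spec:
      replicas: 5
      selector: { app: race-demo }
      template:
        metadata:
          labels: { app: race-demo }
        spec:
          containers:
          - name: test-container
            image: busybox
            command: ["sleep", "3600"]
            volumeMounts:
            - { name: cm-1, mountPath: /etc/cm-1 }
            - { name: cm-2, mountPath: /etc/cm-2 }
          volumes:
          - name: cm-1
            configMap: { name: race-cm-1 }
          - name: cm-2
            configMap: { name: race-cm-2 }
    EOF
    kubectl delete rc wrapped-volume-race-demo    # GC deletes the pods, as in the log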
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:11:08.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c9acc90d-5f2c-47bd-be7a-b0ebd8cf23c1
STEP: Creating a pod to test consume configMaps
Feb  2 14:11:08.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f" in namespace "projected-969" to be "success or failure"
Feb  2 14:11:08.705: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.431598ms
Feb  2 14:11:10.737: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057737037s
Feb  2 14:11:12.745: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065822455s
Feb  2 14:11:14.760: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080939155s
Feb  2 14:11:16.769: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08940074s
Feb  2 14:11:18.787: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.107668758s
Feb  2 14:11:20.794: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.114838255s
STEP: Saw pod success
Feb  2 14:11:20.794: INFO: Pod "pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f" satisfied condition "success or failure"
Feb  2 14:11:20.801: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:11:21.065: INFO: Waiting for pod pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f to disappear
Feb  2 14:11:21.132: INFO: Pod pod-projected-configmaps-b88893a4-39a4-4691-8762-b7253021e33f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:11:21.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-969" for this suite.
Feb  2 14:11:27.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:11:27.292: INFO: namespace projected-969 deletion completed in 6.149370626s

• [SLOW TEST:18.721 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
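"With mappings" means the configMap keys are remapped to custom paths via items; a minimal sketch with hypothetical names:

    kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-configmap-mapping-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/path/to/data-1"]
        volumeMounts:
        - { name: proj, mountPath: /etc/projected }
      volumes:
      - name: proj
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
              items:
              - key: data-1
                path: path/to/data-1    # key remapped to a nested path
    EOF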
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:11:27.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  2 14:11:27.392: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5697,SelfLink:/api/v1/namespaces/watch-5697/configmaps/e2e-watch-test-watch-closed,UID:b4e91c80-7b72-4627-a4fc-b51a8811fe99,ResourceVersion:22824932,Generation:0,CreationTimestamp:2020-02-02 14:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  2 14:11:27.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5697,SelfLink:/api/v1/namespaces/watch-5697/configmaps/e2e-watch-test-watch-closed,UID:b4e91c80-7b72-4627-a4fc-b51a8811fe99,ResourceVersion:22824933,Generation:0,CreationTimestamp:2020-02-02 14:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  2 14:11:27.418: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5697,SelfLink:/api/v1/namespaces/watch-5697/configmaps/e2e-watch-test-watch-closed,UID:b4e91c80-7b72-4627-a4fc-b51a8811fe99,ResourceVersion:22824934,Generation:0,CreationTimestamp:2020-02-02 14:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 14:11:27.419: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5697,SelfLink:/api/v1/namespaces/watch-5697/configmaps/e2e-watch-test-watch-closed,UID:b4e91c80-7b72-4627-a4fc-b51a8811fe99,ResourceVersion:22824935,Generation:0,CreationTimestamp:2020-02-02 14:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:11:27.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5697" for this suite.
Feb  2 14:11:33.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:11:33.623: INFO: namespace watch-5697 deletion completed in 6.174849225s

• [SLOW TEST:6.331 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
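Restarting from the last observed resourceVersion is an ordinary list request with watch=true; a minimal sketch against the same namespace, reusing the versions printed above:

    # Replay everything after resourceVersion 22824933 (the last event the closed
    # watch delivered): first the MODIFIED (mutation: 2), then the DELETED.
    kubectl proxy --port=8001 &
    curl -s 'http://127.0.0.1:8001/api/v1/namespaces/watch-5697/configmaps?watch=true&resourceVersion=22824933'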
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:11:33.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  2 14:11:33.680: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  2 14:11:33.708: INFO: Waiting for terminating namespaces to be deleted...
Feb  2 14:11:33.713: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  2 14:11:33.726: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  2 14:11:33.726: INFO: 	Container weave ready: true, restart count 0
Feb  2 14:11:33.726: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 14:11:33.726: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.726: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 14:11:33.726: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  2 14:11:33.738: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container coredns ready: true, restart count 0
Feb  2 14:11:33.738: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container etcd ready: true, restart count 0
Feb  2 14:11:33.738: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  2 14:11:33.738: INFO: 	Container weave ready: true, restart count 0
Feb  2 14:11:33.738: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 14:11:33.738: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb  2 14:11:33.738: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 14:11:33.738: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  2 14:11:33.738: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  2 14:11:33.738: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  2 14:11:33.738: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  2 14:11:33.955: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  2 14:11:33.956: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  2 14:11:33.956: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-69537485-05e6-4530-8c21-7f54a221dbef.15ef9b4026eeb654], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9404/filler-pod-69537485-05e6-4530-8c21-7f54a221dbef to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-69537485-05e6-4530-8c21-7f54a221dbef.15ef9b41468132b7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-69537485-05e6-4530-8c21-7f54a221dbef.15ef9b4235a21864], Reason = [Created], Message = [Created container filler-pod-69537485-05e6-4530-8c21-7f54a221dbef]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-69537485-05e6-4530-8c21-7f54a221dbef.15ef9b4254264981], Reason = [Started], Message = [Started container filler-pod-69537485-05e6-4530-8c21-7f54a221dbef]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041.15ef9b402784e821], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9404/filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041.15ef9b414976a0f6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041.15ef9b422a405316], Reason = [Created], Message = [Created container filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041.15ef9b424cbc2252], Reason = [Started], Message = [Started container filler-pod-6fb66234-f395-439b-b745-9bfe6abd0041]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ef9b42f693f745], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:11:47.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9404" for this suite.
Feb  2 14:11:56.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:11:56.109: INFO: namespace sched-pred-9404 deletion completed in 8.121032209s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:22.485 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
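The failing pod only needs a CPU request no node can satisfy once the filler pods are running; a minimal sketch (the request value is arbitrary, anything above the largest free allocation works):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "64"    # deliberately unsatisfiable
    EOF
    kubectl describe pod additional-pod-demo    # Events show FailedScheduling: Insufficient cpu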
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:11:56.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-133471dd-ac04-48e0-b584-470a8af29383
STEP: Creating a pod to test consume configMaps
Feb  2 14:11:57.916: INFO: Waiting up to 5m0s for pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b" in namespace "configmap-8105" to be "success or failure"
Feb  2 14:11:57.944: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.793199ms
Feb  2 14:11:59.953: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03681395s
Feb  2 14:12:01.960: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043481125s
Feb  2 14:12:03.981: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063956832s
Feb  2 14:12:06.022: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104932635s
STEP: Saw pod success
Feb  2 14:12:06.022: INFO: Pod "pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b" satisfied condition "success or failure"
Feb  2 14:12:06.044: INFO: Trying to get logs from node iruya-node pod pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b container configmap-volume-test: 
STEP: delete the pod
Feb  2 14:12:06.160: INFO: Waiting for pod pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b to disappear
Feb  2 14:12:06.167: INFO: Pod pod-configmaps-77fa2873-921f-489e-a54f-029e81c59e7b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:12:06.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8105" for this suite.
Feb  2 14:12:12.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:12:12.291: INFO: namespace configmap-8105 deletion completed in 6.117167123s

• [SLOW TEST:16.181 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
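Consuming the volume as non-root is just a pod-level runAsUser plus an ordinary configMap volume; a minimal sketch with hypothetical names:

    kubectl create configmap nonroot-cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["cat", "/etc/configmap-volume/data-1"]
        volumeMounts:
        - { name: cm, mountPath: /etc/configmap-volume }
      volumes:
      - name: cm
        configMap: { name: nonroot-cm-demo }
    EOF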
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:12:12.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-j2mn
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 14:12:12.447: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-j2mn" in namespace "subpath-540" to be "success or failure"
Feb  2 14:12:12.484: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 35.922871ms
Feb  2 14:12:14.501: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05347854s
Feb  2 14:12:16.516: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067962834s
Feb  2 14:12:18.543: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095383481s
Feb  2 14:12:20.556: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108064671s
Feb  2 14:12:22.570: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 10.122823347s
Feb  2 14:12:24.584: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 12.136325399s
Feb  2 14:12:26.600: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 14.152270821s
Feb  2 14:12:28.624: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 16.176593152s
Feb  2 14:12:30.636: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 18.188584316s
Feb  2 14:12:33.213: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 20.765473552s
Feb  2 14:12:35.223: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 22.775217451s
Feb  2 14:12:37.231: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 24.783921015s
Feb  2 14:12:39.241: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Running", Reason="", readiness=true. Elapsed: 26.793460143s
Feb  2 14:12:41.266: INFO: Pod "pod-subpath-test-projected-j2mn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.818801511s
STEP: Saw pod success
Feb  2 14:12:41.267: INFO: Pod "pod-subpath-test-projected-j2mn" satisfied condition "success or failure"
Feb  2 14:12:41.273: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-j2mn container test-container-subpath-projected-j2mn: 
STEP: delete the pod
Feb  2 14:12:41.359: INFO: Waiting for pod pod-subpath-test-projected-j2mn to disappear
Feb  2 14:12:41.415: INFO: Pod pod-subpath-test-projected-j2mn no longer exists
STEP: Deleting pod pod-subpath-test-projected-j2mn
Feb  2 14:12:41.415: INFO: Deleting pod "pod-subpath-test-projected-j2mn" in namespace "subpath-540"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:12:41.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-540" for this suite.
Feb  2 14:12:47.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:12:47.582: INFO: namespace subpath-540 deletion completed in 6.155819686s

• [SLOW TEST:35.291 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
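A subPath mount exposes one entry of the projected volume instead of its root; a minimal sketch with hypothetical names (the real test also rewrites the source configMap to exercise the atomic-writer path):

    kubectl create configmap subpath-cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-subpath-projected-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: busybox
        command: ["cat", "/test-volume"]   # the subPath file itself is mounted here
        volumeMounts:
        - name: proj
          mountPath: /test-volume
          subPath: data-1                  # mount a single entry of the volume
      volumes:
      - name: proj
        projected:
          sources:
          - configMap: { name: subpath-cm-demo }
    EOF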
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:12:47.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:12:47.690: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181" in namespace "downward-api-7905" to be "success or failure"
Feb  2 14:12:47.703: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181": Phase="Pending", Reason="", readiness=false. Elapsed: 12.357967ms
Feb  2 14:12:49.713: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022982927s
Feb  2 14:12:51.722: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031543087s
Feb  2 14:12:53.731: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041182557s
Feb  2 14:12:55.739: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048790126s
STEP: Saw pod success
Feb  2 14:12:55.739: INFO: Pod "downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181" satisfied condition "success or failure"
Feb  2 14:12:55.743: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181 container client-container: 
STEP: delete the pod
Feb  2 14:12:55.837: INFO: Waiting for pod downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181 to disappear
Feb  2 14:12:55.846: INFO: Pod downwardapi-volume-0975a2fa-2b0b-4cfe-9c8e-a65c33056181 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:12:55.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7905" for this suite.
Feb  2 14:13:01.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:13:02.026: INFO: namespace downward-api-7905 deletion completed in 6.170510778s

• [SLOW TEST:14.444 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
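DefaultMode on a downward API volume sets the permission bits of every projected file; a minimal sketch with a hypothetical mode and item:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]   # expect -r-------- for 0400
        volumeMounts:
        - { name: podinfo, mountPath: /etc/podinfo }
      volumes:
      - name: podinfo
        downwardAPI:
          defaultMode: 0400
          items:
          - path: podname
            fieldRef: { fieldPath: metadata.name }
    EOF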
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:13:02.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  2 14:13:02.157: INFO: Waiting up to 5m0s for pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371" in namespace "containers-9990" to be "success or failure"
Feb  2 14:13:02.170: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371": Phase="Pending", Reason="", readiness=false. Elapsed: 12.752575ms
Feb  2 14:13:04.181: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023784079s
Feb  2 14:13:06.188: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031449764s
Feb  2 14:13:08.204: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046761001s
Feb  2 14:13:10.227: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070163416s
STEP: Saw pod success
Feb  2 14:13:10.227: INFO: Pod "client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371" satisfied condition "success or failure"
Feb  2 14:13:10.230: INFO: Trying to get logs from node iruya-node pod client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371 container test-container: 
STEP: delete the pod
Feb  2 14:13:10.355: INFO: Waiting for pod client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371 to disappear
Feb  2 14:13:10.365: INFO: Pod client-containers-6abc962e-1aae-44de-93ef-6bb3843e9371 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:13:10.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9990" for this suite.
Feb  2 14:13:16.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:13:16.560: INFO: namespace containers-9990 deletion completed in 6.189355122s

• [SLOW TEST:14.534 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
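Leaving command and args unset lets the image's own ENTRYPOINT and CMD run; a minimal sketch with a hypothetical image:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: image-defaults-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox    # no command/args: the image's ENTRYPOINT/CMD decide what runs
    EOF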
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:13:16.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:13:16.820: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8b58cafd-f70b-47d8-97c2-905dbda0fed9", Controller:(*bool)(0xc001ef8b3a), BlockOwnerDeletion:(*bool)(0xc001ef8b3b)}}
Feb  2 14:13:16.962: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e2074d6-c5f9-4c4b-b36c-69b1b19cdcad", Controller:(*bool)(0xc002a34ec2), BlockOwnerDeletion:(*bool)(0xc002a34ec3)}}
Feb  2 14:13:17.003: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"95848ee0-491b-4bd6-aa9c-fab8b881c23c", Controller:(*bool)(0xc001ef8fb2), BlockOwnerDeletion:(*bool)(0xc001ef8fb3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:13:22.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6829" for this suite.
Feb  2 14:13:28.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:13:28.271: INFO: namespace gc-6829 deletion completed in 6.182638782s

• [SLOW TEST:11.710 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
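The circular ownership is built by patching server-assigned UIDs into each pod's ownerReferences; a rough sketch (repeat the patch so that pod1 -> pod3, pod2 -> pod1, pod3 -> pod2, matching the references printed above):

    for p in pod1 pod2 pod3; do
      kubectl run $p --image=k8s.gcr.io/pause:3.1 --restart=Never
    done
    OWNER_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
    kubectl patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$OWNER_UID\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
    # Deleting any one pod must still let the garbage collector unwind the cycle.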
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:13:28.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1ae2c90e-8c73-4dd5-9841-4b17fa823987
STEP: Creating a pod to test consume secrets
Feb  2 14:13:28.366: INFO: Waiting up to 5m0s for pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424" in namespace "secrets-9432" to be "success or failure"
Feb  2 14:13:28.369: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234104ms
Feb  2 14:13:30.376: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010168419s
Feb  2 14:13:32.384: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017705203s
Feb  2 14:13:34.428: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061697045s
Feb  2 14:13:36.433: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066919535s
Feb  2 14:13:38.446: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079804124s
STEP: Saw pod success
Feb  2 14:13:38.446: INFO: Pod "pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424" satisfied condition "success or failure"
Feb  2 14:13:38.452: INFO: Trying to get logs from node iruya-node pod pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424 container secret-volume-test: 
STEP: delete the pod
Feb  2 14:13:38.527: INFO: Waiting for pod pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424 to disappear
Feb  2 14:13:38.536: INFO: Pod pod-secrets-f2c8faa6-20a1-4916-a3ea-f2815e8f7424 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:13:38.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9432" for this suite.
Feb  2 14:13:44.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:13:44.816: INFO: namespace secrets-9432 deletion completed in 6.159761136s

• [SLOW TEST:16.544 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
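The defaultMode spec above projects a secret into a volume with non-default file permissions and checks that the mounted file carries them. A rough equivalent, with hypothetical names (demo-secret, pod-secrets-demo) and busybox standing in for the test's own image:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume/data-1"]  # prints the mode set below
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400  # [LinuxOnly]: mode bits applied to each projected key
EOF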
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:13:44.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0202 14:14:15.482115       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 14:14:15.482: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:14:15.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7138" for this suite.
Feb  2 14:14:21.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:14:21.637: INFO: namespace gc-7138 deletion completed in 6.150833817s

• [SLOW TEST:36.820 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
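The orphan behavior verified above can be reproduced directly: delete a Deployment with orphan propagation and confirm the ReplicaSet survives. A sketch with placeholder names; on kubectl of this vintage --cascade=false maps to PropagationPolicy: Orphan (newer releases spell it --cascade=orphan):

kubectl create deployment simple-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment simple-deployment --cascade=false  # orphan the dependents
kubectl get rs -l app=simple-deployment                      # the ReplicaSet should still be listed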
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:14:21.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 14:14:24.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7490'
Feb  2 14:14:27.594: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 14:14:27.594: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  2 14:14:27.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7490'
Feb  2 14:14:27.775: INFO: stderr: ""
Feb  2 14:14:27.775: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:14:27.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7490" for this suite.
Feb  2 14:14:33.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:14:33.949: INFO: namespace kubectl-7490 deletion completed in 6.165293442s

• [SLOW TEST:12.313 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
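The stderr captured at 14:14:27 flags the job/v1 generator as deprecated. For reference, the command the test drives and a rough post-deprecation equivalent (namespace flag omitted; job name and image as in the log):

# What the test runs (generator flag deprecated, per the captured warning):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# Replacement suggested by the warning:
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine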
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:14:33.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  2 14:14:34.098: INFO: Waiting up to 5m0s for pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99" in namespace "containers-104" to be "success or failure"
Feb  2 14:14:34.104: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601123ms
Feb  2 14:14:36.112: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013710312s
Feb  2 14:14:38.119: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021107662s
Feb  2 14:14:40.125: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02683365s
Feb  2 14:14:42.139: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041580717s
STEP: Saw pod success
Feb  2 14:14:42.140: INFO: Pod "client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99" satisfied condition "success or failure"
Feb  2 14:14:42.144: INFO: Trying to get logs from node iruya-node pod client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99 container test-container: 
STEP: delete the pod
Feb  2 14:14:42.305: INFO: Waiting for pod client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99 to disappear
Feb  2 14:14:42.327: INFO: Pod client-containers-b932f6bf-90dc-4a77-bd67-839c8754cf99 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:14:42.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-104" for this suite.
Feb  2 14:14:48.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:14:48.596: INFO: namespace containers-104 deletion completed in 6.242602039s

• [SLOW TEST:14.647 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
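The override spec above sets both command and args on the pod spec, which replace the image's ENTRYPOINT and CMD respectively. A self-contained sketch, with pod and container names as placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image's CMD
EOF
kubectl logs client-containers-demo   # expected output: override arguments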
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:14:48.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2504
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  2 14:14:48.731: INFO: Found 0 stateful pods, waiting for 3
Feb  2 14:14:58.770: INFO: Found 2 stateful pods, waiting for 3
Feb  2 14:15:08.752: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:15:08.753: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:15:08.753: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 14:15:18.740: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:15:18.740: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:15:18.740: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:15:18.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 14:15:19.163: INFO: stderr: "I0202 14:15:18.970931    2686 log.go:172] (0xc00094a370) (0xc000646a00) Create stream\nI0202 14:15:18.971160    2686 log.go:172] (0xc00094a370) (0xc000646a00) Stream added, broadcasting: 1\nI0202 14:15:18.979046    2686 log.go:172] (0xc00094a370) Reply frame received for 1\nI0202 14:15:18.979181    2686 log.go:172] (0xc00094a370) (0xc0007c4000) Create stream\nI0202 14:15:18.979202    2686 log.go:172] (0xc00094a370) (0xc0007c4000) Stream added, broadcasting: 3\nI0202 14:15:18.980868    2686 log.go:172] (0xc00094a370) Reply frame received for 3\nI0202 14:15:18.980915    2686 log.go:172] (0xc00094a370) (0xc000646aa0) Create stream\nI0202 14:15:18.980928    2686 log.go:172] (0xc00094a370) (0xc000646aa0) Stream added, broadcasting: 5\nI0202 14:15:18.982256    2686 log.go:172] (0xc00094a370) Reply frame received for 5\nI0202 14:15:19.062568    2686 log.go:172] (0xc00094a370) Data frame received for 5\nI0202 14:15:19.062674    2686 log.go:172] (0xc000646aa0) (5) Data frame handling\nI0202 14:15:19.062694    2686 log.go:172] (0xc000646aa0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 14:15:19.082112    2686 log.go:172] (0xc00094a370) Data frame received for 3\nI0202 14:15:19.082133    2686 log.go:172] (0xc0007c4000) (3) Data frame handling\nI0202 14:15:19.082147    2686 log.go:172] (0xc0007c4000) (3) Data frame sent\nI0202 14:15:19.152572    2686 log.go:172] (0xc00094a370) Data frame received for 1\nI0202 14:15:19.152639    2686 log.go:172] (0xc000646a00) (1) Data frame handling\nI0202 14:15:19.152660    2686 log.go:172] (0xc000646a00) (1) Data frame sent\nI0202 14:15:19.152680    2686 log.go:172] (0xc00094a370) (0xc000646a00) Stream removed, broadcasting: 1\nI0202 14:15:19.153923    2686 log.go:172] (0xc00094a370) (0xc000646aa0) Stream removed, broadcasting: 5\nI0202 14:15:19.154166    2686 log.go:172] (0xc00094a370) (0xc0007c4000) Stream removed, broadcasting: 3\nI0202 14:15:19.154218    2686 log.go:172] (0xc00094a370) (0xc000646a00) Stream removed, broadcasting: 1\nI0202 14:15:19.154248    2686 log.go:172] (0xc00094a370) (0xc0007c4000) Stream removed, broadcasting: 3\nI0202 14:15:19.154277    2686 log.go:172] (0xc00094a370) (0xc000646aa0) Stream removed, broadcasting: 5\nI0202 14:15:19.154702    2686 log.go:172] (0xc00094a370) Go away received\n"
Feb  2 14:15:19.163: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 14:15:19.163: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  2 14:15:19.285: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  2 14:15:29.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 14:15:29.809: INFO: stderr: "I0202 14:15:29.583195    2706 log.go:172] (0xc000a44630) (0xc0006e0a00) Create stream\nI0202 14:15:29.583504    2706 log.go:172] (0xc000a44630) (0xc0006e0a00) Stream added, broadcasting: 1\nI0202 14:15:29.604971    2706 log.go:172] (0xc000a44630) Reply frame received for 1\nI0202 14:15:29.605113    2706 log.go:172] (0xc000a44630) (0xc0006e0280) Create stream\nI0202 14:15:29.605129    2706 log.go:172] (0xc000a44630) (0xc0006e0280) Stream added, broadcasting: 3\nI0202 14:15:29.607969    2706 log.go:172] (0xc000a44630) Reply frame received for 3\nI0202 14:15:29.608141    2706 log.go:172] (0xc000a44630) (0xc00091c000) Create stream\nI0202 14:15:29.608167    2706 log.go:172] (0xc000a44630) (0xc00091c000) Stream added, broadcasting: 5\nI0202 14:15:29.610104    2706 log.go:172] (0xc000a44630) Reply frame received for 5\nI0202 14:15:29.701277    2706 log.go:172] (0xc000a44630) Data frame received for 3\nI0202 14:15:29.701363    2706 log.go:172] (0xc0006e0280) (3) Data frame handling\nI0202 14:15:29.701379    2706 log.go:172] (0xc0006e0280) (3) Data frame sent\nI0202 14:15:29.701435    2706 log.go:172] (0xc000a44630) Data frame received for 5\nI0202 14:15:29.701445    2706 log.go:172] (0xc00091c000) (5) Data frame handling\nI0202 14:15:29.701472    2706 log.go:172] (0xc00091c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 14:15:29.798920    2706 log.go:172] (0xc000a44630) (0xc00091c000) Stream removed, broadcasting: 5\nI0202 14:15:29.799138    2706 log.go:172] (0xc000a44630) Data frame received for 1\nI0202 14:15:29.799188    2706 log.go:172] (0xc000a44630) (0xc0006e0280) Stream removed, broadcasting: 3\nI0202 14:15:29.799289    2706 log.go:172] (0xc0006e0a00) (1) Data frame handling\nI0202 14:15:29.799332    2706 log.go:172] (0xc0006e0a00) (1) Data frame sent\nI0202 14:15:29.799352    2706 log.go:172] (0xc000a44630) (0xc0006e0a00) Stream removed, broadcasting: 1\nI0202 14:15:29.799991    2706 log.go:172] (0xc000a44630) Go away received\nI0202 14:15:29.800471    2706 log.go:172] (0xc000a44630) (0xc0006e0a00) Stream removed, broadcasting: 1\nI0202 14:15:29.800494    2706 log.go:172] (0xc000a44630) (0xc0006e0280) Stream removed, broadcasting: 3\nI0202 14:15:29.800501    2706 log.go:172] (0xc000a44630) (0xc00091c000) Stream removed, broadcasting: 5\n"
Feb  2 14:15:29.809: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 14:15:29.809: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 14:15:39.863: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:15:39.863: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:39.863: INFO: Waiting for Pod statefulset-2504/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:39.863: INFO: Waiting for Pod statefulset-2504/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:49.890: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:15:49.890: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:49.890: INFO: Waiting for Pod statefulset-2504/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:59.885: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:15:59.885: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:15:59.885: INFO: Waiting for Pod statefulset-2504/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:16:09.917: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:16:09.917: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  2 14:16:19.897: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  2 14:16:29.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  2 14:16:30.424: INFO: stderr: "I0202 14:16:30.130307    2726 log.go:172] (0xc0007b6580) (0xc00081b860) Create stream\nI0202 14:16:30.130613    2726 log.go:172] (0xc0007b6580) (0xc00081b860) Stream added, broadcasting: 1\nI0202 14:16:30.140307    2726 log.go:172] (0xc0007b6580) Reply frame received for 1\nI0202 14:16:30.140403    2726 log.go:172] (0xc0007b6580) (0xc00080b220) Create stream\nI0202 14:16:30.140444    2726 log.go:172] (0xc0007b6580) (0xc00080b220) Stream added, broadcasting: 3\nI0202 14:16:30.143168    2726 log.go:172] (0xc0007b6580) Reply frame received for 3\nI0202 14:16:30.143289    2726 log.go:172] (0xc0007b6580) (0xc00080b2c0) Create stream\nI0202 14:16:30.143301    2726 log.go:172] (0xc0007b6580) (0xc00080b2c0) Stream added, broadcasting: 5\nI0202 14:16:30.144794    2726 log.go:172] (0xc0007b6580) Reply frame received for 5\nI0202 14:16:30.253622    2726 log.go:172] (0xc0007b6580) Data frame received for 5\nI0202 14:16:30.253716    2726 log.go:172] (0xc00080b2c0) (5) Data frame handling\nI0202 14:16:30.253740    2726 log.go:172] (0xc00080b2c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0202 14:16:30.295972    2726 log.go:172] (0xc0007b6580) Data frame received for 3\nI0202 14:16:30.296036    2726 log.go:172] (0xc00080b220) (3) Data frame handling\nI0202 14:16:30.296069    2726 log.go:172] (0xc00080b220) (3) Data frame sent\nI0202 14:16:30.407191    2726 log.go:172] (0xc0007b6580) (0xc00080b2c0) Stream removed, broadcasting: 5\nI0202 14:16:30.407500    2726 log.go:172] (0xc0007b6580) Data frame received for 1\nI0202 14:16:30.407530    2726 log.go:172] (0xc00081b860) (1) Data frame handling\nI0202 14:16:30.407552    2726 log.go:172] (0xc00081b860) (1) Data frame sent\nI0202 14:16:30.407668    2726 log.go:172] (0xc0007b6580) (0xc00081b860) Stream removed, broadcasting: 1\nI0202 14:16:30.409078    2726 log.go:172] (0xc0007b6580) (0xc00080b220) Stream removed, broadcasting: 3\nI0202 14:16:30.409162    2726 log.go:172] (0xc0007b6580) Go away received\nI0202 14:16:30.409931    2726 log.go:172] (0xc0007b6580) (0xc00081b860) Stream removed, broadcasting: 1\nI0202 14:16:30.410157    2726 log.go:172] (0xc0007b6580) (0xc00080b220) Stream removed, broadcasting: 3\nI0202 14:16:30.410178    2726 log.go:172] (0xc0007b6580) (0xc00080b2c0) Stream removed, broadcasting: 5\n"
Feb  2 14:16:30.425: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  2 14:16:30.425: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  2 14:16:40.483: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  2 14:16:50.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2504 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  2 14:16:50.957: INFO: stderr: "I0202 14:16:50.762199    2744 log.go:172] (0xc000116790) (0xc000554820) Create stream\nI0202 14:16:50.762964    2744 log.go:172] (0xc000116790) (0xc000554820) Stream added, broadcasting: 1\nI0202 14:16:50.768232    2744 log.go:172] (0xc000116790) Reply frame received for 1\nI0202 14:16:50.768312    2744 log.go:172] (0xc000116790) (0xc000708000) Create stream\nI0202 14:16:50.768325    2744 log.go:172] (0xc000116790) (0xc000708000) Stream added, broadcasting: 3\nI0202 14:16:50.769832    2744 log.go:172] (0xc000116790) Reply frame received for 3\nI0202 14:16:50.769858    2744 log.go:172] (0xc000116790) (0xc0005548c0) Create stream\nI0202 14:16:50.769865    2744 log.go:172] (0xc000116790) (0xc0005548c0) Stream added, broadcasting: 5\nI0202 14:16:50.771946    2744 log.go:172] (0xc000116790) Reply frame received for 5\nI0202 14:16:50.870058    2744 log.go:172] (0xc000116790) Data frame received for 5\nI0202 14:16:50.870133    2744 log.go:172] (0xc0005548c0) (5) Data frame handling\nI0202 14:16:50.870155    2744 log.go:172] (0xc0005548c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0202 14:16:50.870196    2744 log.go:172] (0xc000116790) Data frame received for 3\nI0202 14:16:50.870203    2744 log.go:172] (0xc000708000) (3) Data frame handling\nI0202 14:16:50.870210    2744 log.go:172] (0xc000708000) (3) Data frame sent\nI0202 14:16:50.946049    2744 log.go:172] (0xc000116790) (0xc000708000) Stream removed, broadcasting: 3\nI0202 14:16:50.947131    2744 log.go:172] (0xc000116790) (0xc0005548c0) Stream removed, broadcasting: 5\nI0202 14:16:50.947306    2744 log.go:172] (0xc000116790) Data frame received for 1\nI0202 14:16:50.947329    2744 log.go:172] (0xc000554820) (1) Data frame handling\nI0202 14:16:50.947374    2744 log.go:172] (0xc000554820) (1) Data frame sent\nI0202 14:16:50.947383    2744 log.go:172] (0xc000116790) (0xc000554820) Stream removed, broadcasting: 1\nI0202 14:16:50.947395    2744 log.go:172] (0xc000116790) Go away received\nI0202 14:16:50.948626    2744 log.go:172] (0xc000116790) (0xc000554820) Stream removed, broadcasting: 1\nI0202 14:16:50.948642    2744 log.go:172] (0xc000116790) (0xc000708000) Stream removed, broadcasting: 3\nI0202 14:16:50.948647    2744 log.go:172] (0xc000116790) (0xc0005548c0) Stream removed, broadcasting: 5\n"
Feb  2 14:16:50.957: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  2 14:16:50.958: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  2 14:17:01.010: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:17:01.010: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 14:17:01.010: INFO: Waiting for Pod statefulset-2504/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 14:17:11.033: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:17:11.033: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 14:17:21.022: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
Feb  2 14:17:21.022: INFO: Waiting for Pod statefulset-2504/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  2 14:17:31.022: INFO: Waiting for StatefulSet statefulset-2504/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  2 14:17:41.024: INFO: Deleting all statefulset in ns statefulset-2504
Feb  2 14:17:41.028: INFO: Scaling statefulset ss2 to 0
Feb  2 14:18:21.059: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 14:18:21.064: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:18:21.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2504" for this suite.
Feb  2 14:18:29.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:18:29.264: INFO: namespace statefulset-2504 deletion completed in 8.163787187s

• [SLOW TEST:220.668 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
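The rolling-update/rollback flow above (nginx:1.14-alpine -> 1.15-alpine and back, revisions ss2-6c5cd755cd / ss2-7c9b54fd4c) corresponds roughly to the following by-hand sequence; the namespace is a placeholder and the container is assumed to be named nginx:

kubectl -n statefulset-demo set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-demo rollout status statefulset/ss2
# Roll back to the prior revision, as the second half of the spec does:
kubectl -n statefulset-demo rollout undo statefulset/ss2
kubectl -n statefulset-demo get controllerrevisions  # the ss2-... revision names live here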
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:18:29.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:18:29.527: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30" in namespace "downward-api-8765" to be "success or failure"
Feb  2 14:18:29.542: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Pending", Reason="", readiness=false. Elapsed: 15.175827ms
Feb  2 14:18:31.552: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025077903s
Feb  2 14:18:33.563: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035460986s
Feb  2 14:18:35.571: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043605072s
Feb  2 14:18:37.580: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052654311s
Feb  2 14:18:39.620: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092841592s
STEP: Saw pod success
Feb  2 14:18:39.620: INFO: Pod "downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30" satisfied condition "success or failure"
Feb  2 14:18:39.628: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30 container client-container: 
STEP: delete the pod
Feb  2 14:18:39.775: INFO: Waiting for pod downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30 to disappear
Feb  2 14:18:39.792: INFO: Pod downwardapi-volume-74a0ee64-d5ad-4534-a283-7336d8475b30 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:18:39.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8765" for this suite.
Feb  2 14:18:45.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:18:46.001: INFO: namespace downward-api-8765 deletion completed in 6.195515232s

• [SLOW TEST:16.736 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
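The memory-limit spec above relies on a downwardAPI volume with a resourceFieldRef; the kubelet writes the container's limit into the mounted file. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # surfaced to the container as a byte count
EOF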
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:18:46.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  2 14:18:54.704: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6740 pod-service-account-2fb817b1-10d4-4de1-b2f8-d0ab8ec0f119 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  2 14:18:55.291: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6740 pod-service-account-2fb817b1-10d4-4de1-b2f8-d0ab8ec0f119 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  2 14:18:55.782: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6740 pod-service-account-2fb817b1-10d4-4de1-b2f8-d0ab8ec0f119 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:18:56.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6740" for this suite.
Feb  2 14:19:02.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:19:02.665: INFO: namespace svcaccounts-6740 deletion completed in 6.245758069s

• [SLOW TEST:16.664 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
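The three exec calls above read the standard projection paths that every service-account-bearing pod gets. Condensed, with a placeholder pod name:

for f in token ca.crt namespace; do
  kubectl exec pod-service-account-demo -c test -- \
    cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
done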
SS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:19:02.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-99fe6539-23fb-43f9-bad1-c3787cae3e17
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:19:02.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2464" for this suite.
Feb  2 14:19:08.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:19:08.886: INFO: namespace configmap-2464 deletion completed in 6.118802903s

• [SLOW TEST:6.221 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
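The empty-key spec expects API-server validation to refuse the object, which is why namespace teardown is the only cleanup above. A sketch of a request that should be rejected (the name is hypothetical, and the exact error text may vary by version):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"   # empty key: the API server is expected to reject this at validation
EOF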
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:19:08.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-32543c67-3310-4371-aef8-52acc0887440
STEP: Creating configMap with name cm-test-opt-upd-df2ac1f5-c03c-410c-b2bd-8f4e998a63ca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-32543c67-3310-4371-aef8-52acc0887440
STEP: Updating configmap cm-test-opt-upd-df2ac1f5-c03c-410c-b2bd-8f4e998a63ca
STEP: Creating configMap with name cm-test-opt-create-ce82601b-17bd-4839-af58-9a8c88e31328
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:20:47.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4682" for this suite.
Feb  2 14:21:09.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:21:09.790: INFO: namespace projected-4682 deletion completed in 22.222594485s

• [SLOW TEST:120.904 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
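The optional-update spec above hinges on optional: true in a projected volume: deleting one source configMap must not break the pod, and creating a new one must surface its keys in the mounted volume. A trimmed sketch with placeholder names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del
          optional: true   # pod keeps running even if this map is deleted
      - configMap:
          name: cm-test-opt-create
          optional: true   # keys appear in the volume once the map is created
EOF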
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:21:09.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  2 14:21:18.024: INFO: Pod pod-hostip-50a6f0b3-cd28-41cb-9a10-198ed391ae2f has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:21:18.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6242" for this suite.
Feb  2 14:21:40.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:21:40.181: INFO: namespace pods-6242 deletion completed in 22.151797275s

• [SLOW TEST:30.390 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
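status.hostIP, asserted above (10.96.3.65, i.e. iruya-node's address), is populated by the kubelet once the pod is bound to a node. It can be read the same way by hand; the pod name here is a placeholder:

kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'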
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:21:40.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f8cc1745-2da5-4c03-b2c8-bca0f7644e0b
STEP: Creating a pod to test consume secrets
Feb  2 14:21:40.394: INFO: Waiting up to 5m0s for pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0" in namespace "secrets-8807" to be "success or failure"
Feb  2 14:21:40.415: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.421407ms
Feb  2 14:21:42.422: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027428405s
Feb  2 14:21:44.433: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038154835s
Feb  2 14:21:46.441: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046720206s
Feb  2 14:21:48.461: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066926852s
STEP: Saw pod success
Feb  2 14:21:48.462: INFO: Pod "pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0" satisfied condition "success or failure"
Feb  2 14:21:48.466: INFO: Trying to get logs from node iruya-node pod pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0 container secret-volume-test: 
STEP: delete the pod
Feb  2 14:21:48.587: INFO: Waiting for pod pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0 to disappear
Feb  2 14:21:48.600: INFO: Pod pod-secrets-30b0c7fd-7881-4da1-93e5-1b741b4821c0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:21:48.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8807" for this suite.
Feb  2 14:21:54.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:21:54.829: INFO: namespace secrets-8807 deletion completed in 6.222006472s
STEP: Destroying namespace "secret-namespace-7262" for this suite.
Feb  2 14:22:00.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:22:01.037: INFO: namespace secret-namespace-7262 deletion completed in 6.208074624s

• [SLOW TEST:20.856 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
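Note the two teardown steps above: the spec creates a second namespace holding a same-named secret to prove that volume lookups never cross namespaces. A sketch with placeholder namespaces and values:

kubectl create namespace secrets-demo
kubectl create namespace secret-namespace-demo
kubectl -n secret-namespace-demo create secret generic secret-test --from-literal=data-1=decoy
kubectl -n secrets-demo create secret generic secret-test --from-literal=data-1=value-1
# A pod in secrets-demo that mounts "secret-test" must see value-1, never the
# identically named secret from secret-namespace-demo.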
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:22:01.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:22:01.154: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755" in namespace "projected-130" to be "success or failure"
Feb  2 14:22:01.168: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755": Phase="Pending", Reason="", readiness=false. Elapsed: 14.275338ms
Feb  2 14:22:03.191: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03658988s
Feb  2 14:22:05.196: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042521025s
Feb  2 14:22:07.209: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05521144s
Feb  2 14:22:09.216: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062427117s
STEP: Saw pod success
Feb  2 14:22:09.216: INFO: Pod "downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755" satisfied condition "success or failure"
Feb  2 14:22:09.223: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755 container client-container: 
STEP: delete the pod
Feb  2 14:22:09.352: INFO: Waiting for pod downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755 to disappear
Feb  2 14:22:09.379: INFO: Pod downwardapi-volume-7f08e127-c495-4322-a8ca-4dfbe2124755 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:22:09.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-130" for this suite.
Feb  2 14:22:15.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:22:15.670: INFO: namespace projected-130 deletion completed in 6.275337337s

• [SLOW TEST:14.633 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
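The cpu-request spec mirrors the memory-limit one but goes through a projected downwardAPI source, where a divisor controls the reported unit. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m   # report the request in millicores (prints 250)
EOF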
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:22:15.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:22:15.755: INFO: Creating deployment "nginx-deployment"
Feb  2 14:22:15.765: INFO: Waiting for observed generation 1
Feb  2 14:22:19.094: INFO: Waiting for all required pods to come up
Feb  2 14:22:19.104: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  2 14:22:43.365: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  2 14:22:43.375: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  2 14:22:43.390: INFO: Updating deployment nginx-deployment
Feb  2 14:22:43.390: INFO: Waiting for observed generation 2
Feb  2 14:22:46.489: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  2 14:22:46.526: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  2 14:22:46.538: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  2 14:22:46.553: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  2 14:22:46.553: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  2 14:22:46.557: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  2 14:22:46.583: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  2 14:22:46.583: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  2 14:22:47.439: INFO: Updating deployment nginx-deployment
Feb  2 14:22:47.439: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  2 14:22:47.762: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  2 14:22:51.514: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
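The 20/13 split just verified is the proportional-scaling arithmetic: with the rollout stalled on the nginx:404 image, the old ReplicaSet held 8 replicas and the new one 5; scaling 10 -> 30 adds 20 desired replicas, divided by each ReplicaSet's share (8/13 of 20 ≈ 12, 5/13 of 20 ≈ 8), giving 20 and 13. The same sequence by hand, with a placeholder namespace (the container is named nginx per the dumps below):

kubectl -n deployment-demo set image deployment/nginx-deployment nginx=nginx:404  # bad tag stalls the rollout mid-flight
kubectl -n deployment-demo scale deployment nginx-deployment --replicas=30
kubectl -n deployment-demo get rs  # extra replicas split by share: 8 -> 20, 5 -> 13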
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  2 14:22:53.815: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6815,SelfLink:/apis/apps/v1/namespaces/deployment-6815/deployments/nginx-deployment,UID:5a60e939-5617-4486-b295-4db38054bd9d,ResourceVersion:22826909,Generation:3,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-02-02 14:22:47 +0000 UTC 2020-02-02 14:22:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-02 14:22:51 +0000 UTC 2020-02-02 14:22:15 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  2 14:22:54.831: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6815,SelfLink:/apis/apps/v1/namespaces/deployment-6815/replicasets/nginx-deployment-55fb7cb77f,UID:4a90f535-0f6e-40ca-9676-f6e032e7e023,ResourceVersion:22826907,Generation:3,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5a60e939-5617-4486-b295-4db38054bd9d 0xc00289e127 0xc00289e128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 14:22:54.831: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  2 14:22:54.831: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6815,SelfLink:/apis/apps/v1/namespaces/deployment-6815/replicasets/nginx-deployment-7b8c6f4498,UID:5f7c9c84-4a77-4de9-b720-34ca5849dd76,ResourceVersion:22826914,Generation:3,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 5a60e939-5617-4486-b295-4db38054bd9d 0xc00289e1f7 0xc00289e1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  2 14:22:57.052: INFO: Pod "nginx-deployment-55fb7cb77f-7s7wz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7s7wz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-7s7wz,UID:098b5666-6293-4f54-83af-9ccb8f5abb49,ResourceVersion:22826891,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1a1e7 0xc001b1a1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1a260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1a280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.053: INFO: Pod "nginx-deployment-55fb7cb77f-bb6gx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bb6gx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-bb6gx,UID:dab007ee-1e2c-45c3-a839-edf146bfe3ac,ResourceVersion:22826841,Generation:0,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1a3a7 0xc001b1a3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1a4f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1a510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
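[editor's note] The "is not available" pods show the standard condition progression: PodScheduled and Initialized go True, while Ready and ContainersReady stay False with reason ContainersNotReady until the container starts. A self-contained sketch of the Ready-condition check this reflects, reimplementing the common helper from k8s.io/kubernetes/pkg/api/v1/pod against the upstream corev1 types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse,
				Reason: "ContainersNotReady"},
		},
	}}
	fmt.Println("ready:", isPodReady(p)) // false, like the pods above
}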
Feb  2 14:22:57.054: INFO: Pod "nginx-deployment-55fb7cb77f-gwrqh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gwrqh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-gwrqh,UID:afa68060-4375-4ff5-bad1-3577e7faa850,ResourceVersion:22826818,Generation:0,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1a667 0xc001b1a668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1a6e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1a700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.054: INFO: Pod "nginx-deployment-55fb7cb77f-h2t5v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h2t5v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-h2t5v,UID:ad0d32a4-6f11-44e3-929e-69f825edd04a,ResourceVersion:22826840,Generation:0,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1a8e7 0xc001b1a8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1a970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1a9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-02 14:22:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.055: INFO: Pod "nginx-deployment-55fb7cb77f-j58k9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j58k9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-j58k9,UID:485eaf8e-1de4-4fe5-995d-0937b4a94ea4,ResourceVersion:22826901,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1aaf7 0xc001b1aaf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1ab60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1ab80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-02 14:22:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.056: INFO: Pod "nginx-deployment-55fb7cb77f-n8wxd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n8wxd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-n8wxd,UID:541abdbd-ed23-458e-8eb5-182b12aa0e2c,ResourceVersion:22826806,Generation:0,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1add7 0xc001b1add8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1ae50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1ae70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.056: INFO: Pod "nginx-deployment-55fb7cb77f-nkczl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nkczl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-nkczl,UID:b5fdcca1-238e-4ec2-8121-bf2b166405d7,ResourceVersion:22826912,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1b017 0xc001b1b018}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1b0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1b120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-02 14:22:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.057: INFO: Pod "nginx-deployment-55fb7cb77f-nrhrt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nrhrt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-nrhrt,UID:31a9becd-8d37-4bdb-88f6-56a1297cffb3,ResourceVersion:22826895,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1b2b7 0xc001b1b2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1b340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1b360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
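[editor's note] Every pod dump carries the same two NoExecute tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable; the trailing hex values (e.g. 0xc001b1b340) are just the printed *int64 pointers of their TolerationSeconds. A sketch of those tolerations as structs; the 300s value is the usual DefaultTolerationSeconds admission default and is assumed here, since the log shows only the pointer addresses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// The two default NoExecute tolerations visible in every pod above.
func main() {
	seconds := int64(300) // assumed; only the pointer appears in the log
	tolerations := []corev1.Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
		{Key: "node.kubernetes.io/unreachable", Operator: corev1.TolerationOpExists,
			Effect: corev1.TaintEffectNoExecute, TolerationSeconds: &seconds},
	}
	for _, t := range tolerations {
		fmt.Printf("%s %s %s %ds\n", t.Key, t.Operator, t.Effect, *t.TolerationSeconds)
	}
}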
Feb  2 14:22:57.057: INFO: Pod "nginx-deployment-55fb7cb77f-trm9b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-trm9b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-trm9b,UID:b21703ec-ebc0-4bfe-ba89-4134bacc2b35,ResourceVersion:22826898,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1b467 0xc001b1b468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1b520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1b540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.058: INFO: Pod "nginx-deployment-55fb7cb77f-vrhtt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vrhtt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-vrhtt,UID:f30a8c4e-14e0-47e6-bee7-421430315efe,ResourceVersion:22826875,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1b5e7 0xc001b1b5e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1b660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1b680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.058: INFO: Pod "nginx-deployment-55fb7cb77f-wf2dz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wf2dz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-wf2dz,UID:d1deda49-7315-4f5e-9c2c-1999e24e8cdc,ResourceVersion:22826897,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1b7e7 0xc001b1b7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1ba10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1ba30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.059: INFO: Pod "nginx-deployment-55fb7cb77f-wgkn2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgkn2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-wgkn2,UID:6195ff5a-138a-4a73-8a5c-647b95202485,ResourceVersion:22826906,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1bb17 0xc001b1bb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1bbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1bbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.059: INFO: Pod "nginx-deployment-55fb7cb77f-xlmgk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xlmgk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-55fb7cb77f-xlmgk,UID:cf9396d7-ad6a-4e45-8c5b-169c00ab9b2e,ResourceVersion:22826816,Generation:0,CreationTimestamp:2020-02-02 14:22:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 4a90f535-0f6e-40ca-9676-f6e032e7e023 0xc001b1bc67 0xc001b1bc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1bcd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1bcf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:43 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-02 14:22:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
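[editor's note] All of the nginx-deployment-55fb7cb77f pods above reference the image nginx:404, a tag that does not exist, so their containers never leave the Waiting state (Reason ContainerCreating in these dumps; such pulls typically progress to ErrImagePull or ImagePullBackOff). A minimal sketch of extracting the waiting reason from ContainerStatuses, the way one would diagnose these stuck pods:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons collects the Waiting reason of each container.
func waitingReasons(pod *corev1.Pod) map[string]string {
	out := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			out[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return out
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		ContainerStatuses: []corev1.ContainerStatus{
			{Name: "nginx", State: corev1.ContainerState{
				Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
			}},
		},
	}}
	fmt.Println(waitingReasons(p)) // map[nginx:ContainerCreating]
}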
Feb  2 14:22:57.060: INFO: Pod "nginx-deployment-7b8c6f4498-226mb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-226mb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-226mb,UID:5ce68332-f375-41ba-9d63-1fa580b2e8e5,ResourceVersion:22826870,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001b1bdd7 0xc001b1bdd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1be50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1be70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.060: INFO: Pod "nginx-deployment-7b8c6f4498-2fn7h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2fn7h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-2fn7h,UID:2a9c39f5-14f6-4fef-acf3-8b5c7c77a6fc,ResourceVersion:22826772,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001b1bef7 0xc001b1bef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b1bf60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b1bf80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://63923b12e5b8dfbc92499fbb244150a48e61db6bbec2b495738b253ffb613bd8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
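[editor's note] This pod, by contrast, is reported "is available": it is Running on the old image and its Ready condition went True at 14:22:41. A pod counts as available once Ready has held for the ReplicaSet's MinReadySeconds; both ReplicaSets above have MinReadySeconds:0, so Ready alone suffices. A simplified stand-in for the upstream IsPodAvailable helper, with timestamps taken from the dump:

package main

import (
	"fmt"
	"time"
)

// isAvailable: Ready has held for at least minReady as of now.
func isAvailable(readySince time.Time, minReady time.Duration, now time.Time) bool {
	if readySince.IsZero() {
		return false
	}
	return !now.Before(readySince.Add(minReady))
}

func main() {
	ready := time.Date(2020, 2, 2, 14, 22, 41, 0, time.UTC) // Ready transition above
	now := time.Date(2020, 2, 2, 14, 22, 57, 0, time.UTC)   // log timestamp
	fmt.Println("available:", isAvailable(ready, 0, now))   // true
}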
Feb  2 14:22:57.061: INFO: Pod "nginx-deployment-7b8c6f4498-2tpd6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2tpd6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-2tpd6,UID:830dbd07-af93-4d8e-94d9-13168e72d5ea,ResourceVersion:22826888,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae057 0xc001fae058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae0d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae0f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.061: INFO: Pod "nginx-deployment-7b8c6f4498-6kgp2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6kgp2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-6kgp2,UID:5cd0af0d-3604-46f1-b3ce-10d4f2b25a06,ResourceVersion:22826753,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae177 0xc001fae178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://97eebb63099eeb49f778e88e9df6d8031bdb84a3195bb2905a4c6bf4e29a4a34}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.061: INFO: Pod "nginx-deployment-7b8c6f4498-75lkp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-75lkp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-75lkp,UID:6d5ea049-03c2-42de-b4e7-943907e1d4cb,ResourceVersion:22826923,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae2d7 0xc001fae2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.062: INFO: Pod "nginx-deployment-7b8c6f4498-9kp2g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9kp2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-9kp2g,UID:35fafc0f-46ce-4687-a759-661a5d0e7356,ResourceVersion:22826896,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae437 0xc001fae438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae4a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.062: INFO: Pod "nginx-deployment-7b8c6f4498-db6qk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-db6qk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-db6qk,UID:2f6fd399-7e26-4d03-abc8-b09d6c1e654f,ResourceVersion:22826921,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae567 0xc001fae568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-02 14:22:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.062: INFO: Pod "nginx-deployment-7b8c6f4498-fxqz9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fxqz9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-fxqz9,UID:03c9e084-887b-4e85-9c1c-5b87f73b434f,ResourceVersion:22826892,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae6b7 0xc001fae6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.062: INFO: Pod "nginx-deployment-7b8c6f4498-gt2dt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gt2dt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-gt2dt,UID:7e9f93f6-c5fa-475a-a051-0e97f4e7d68f,ResourceVersion:22826751,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae7d7 0xc001fae7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae840} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e82ae13fd3503f277390822c72f419e64aa2af3945f7a18e825c94d56947b88}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.063: INFO: Pod "nginx-deployment-7b8c6f4498-hsn5x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hsn5x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-hsn5x,UID:bd8b5b1f-2713-4f8b-a370-1e875bff0bd8,ResourceVersion:22826766,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001fae937 0xc001fae938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fae9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fae9d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://79fcf12b8c992a6b3fb9865e75d426cc994ceec6a713ac32761c90ae8ff41e14}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.063: INFO: Pod "nginx-deployment-7b8c6f4498-jkjph" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jkjph,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-jkjph,UID:f2939e87-0b79-4b28-b993-6a5d42fcb6ff,ResourceVersion:22826887,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faeaa7 0xc001faeaa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faeb20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faeb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.064: INFO: Pod "nginx-deployment-7b8c6f4498-jl6l2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jl6l2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-jl6l2,UID:c6301559-c3dc-44da-be06-0214161f9a02,ResourceVersion:22826873,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faebc7 0xc001faebc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faec40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faec60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.064: INFO: Pod "nginx-deployment-7b8c6f4498-lwcrx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lwcrx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-lwcrx,UID:87cec6f7-ee5d-44ad-9c68-df084912cc9a,ResourceVersion:22826781,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faece7 0xc001faece8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faed60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faed80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://00ac9420ab1c5115b97699649d4c272899d9276837f84f7ce093d4c934753223}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.064: INFO: Pod "nginx-deployment-7b8c6f4498-ns5pj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ns5pj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-ns5pj,UID:f2c4e668-1967-4453-9403-57861eeb751c,ResourceVersion:22826757,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faee57 0xc001faee58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faeee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faef00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4e2813115c7d387e884020bc1bb9f31b41b3f6de4e1c8015af9aa03119354198}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.065: INFO: Pod "nginx-deployment-7b8c6f4498-rgwth" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rgwth,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-rgwth,UID:cc321a3a-0eee-45d0-88a9-1d85c6eea0e9,ResourceVersion:22826917,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faefe7 0xc001faefe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.065: INFO: Pod "nginx-deployment-7b8c6f4498-sch4c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sch4c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-sch4c,UID:8ff3e50e-fe6d-4931-9311-53826ae228e2,ResourceVersion:22826769,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faf157 0xc001faf158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf1c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://81bd5281da851b5f0bc8d3b72c5fcd4a415e89b2fca9f6092291d61eaf46fc13}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.065: INFO: Pod "nginx-deployment-7b8c6f4498-sfn9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sfn9g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-sfn9g,UID:e3a1d2b1-14c6-4b11-874c-0ee304406512,ResourceVersion:22826902,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faf2b7 0xc001faf2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-02 14:22:48 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.066: INFO: Pod "nginx-deployment-7b8c6f4498-vhktz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vhktz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-vhktz,UID:968ffaac-0864-4a58-a8cd-4a6d218d6a74,ResourceVersion:22826778,Generation:0,CreationTimestamp:2020-02-02 14:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faf417 0xc001faf418}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf490} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-02 14:22:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:22:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://2d83132e378e33ec70a95f57bd2519365b369dd6597ad2972f4a863e4ec5aa5a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.066: INFO: Pod "nginx-deployment-7b8c6f4498-vqgn2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vqgn2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-vqgn2,UID:8f51d861-8b4c-4d38-bc7a-8fb638b1f1af,ResourceVersion:22826885,Generation:0,CreationTimestamp:2020-02-02 14:22:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faf587 0xc001faf588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf5f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:22:57.066: INFO: Pod "nginx-deployment-7b8c6f4498-xzbb7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xzbb7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6815,SelfLink:/api/v1/namespaces/deployment-6815/pods/nginx-deployment-7b8c6f4498-xzbb7,UID:dddb97c5-b479-46d4-836e-f3de59baf4ba,ResourceVersion:22826894,Generation:0,CreationTimestamp:2020-02-02 14:22:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5f7c9c84-4a77-4de9-b720-34ca5849dd76 0xc001faf697 0xc001faf698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hqwlz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hqwlz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hqwlz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001faf700} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001faf730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:22:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
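
Context, not part of the captured test output: the "is available" / "is not available" verdicts above follow the pod-availability rule. A pod counts as available once its Ready condition has held for at least the deployment's MinReadySeconds (0 here, so Ready alone implies available); the Pending pods with no Ready condition therefore report "not available". A minimal runnable sketch of that check, assuming the v1.15-era in-tree helper package path (importable inside the kubernetes repo):

    // availability_sketch.go: illustrative only; assumes the in-tree helper
    // k8s.io/kubernetes/pkg/api/v1/pod is available on the import path.
    package main

    import (
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        podutil "k8s.io/kubernetes/pkg/api/v1/pod"
    )

    func main() {
        pod := &v1.Pod{Status: v1.PodStatus{
            Phase: v1.PodRunning,
            Conditions: []v1.PodCondition{{
                Type:               v1.PodReady,
                Status:             v1.ConditionTrue,
                LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
            }},
        }}
        // minReadySeconds is 0 for nginx-deployment, so Ready alone suffices.
        fmt.Println(podutil.IsPodAvailable(pod, 0, metav1.Now())) // true
    }
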
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:22:57.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6815" for this suite.
Feb  2 14:24:01.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:24:01.644: INFO: namespace deployment-6815 deletion completed in 1m3.148878236s

• [SLOW TEST:105.973 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
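Context, not part of the test output: "proportional scaling" resolves the RollingUpdate percentages seen in the dumps above against the current replica count, and the documented rounding is asymmetric: maxSurge rounds up, maxUnavailable rounds down. A small self-contained sketch of that arithmetic (the replica count is illustrative, not read from the log):

    // proportional_scaling_sketch.go: how 25%/25% resolve to pod counts.
    package main

    import "fmt"

    // resolvePercent mirrors the documented rounding: maxSurge rounds up,
    // maxUnavailable rounds down.
    func resolvePercent(percent, replicas int, roundUp bool) int {
        v := percent * replicas
        if roundUp {
            return (v + 99) / 100 // ceiling for non-negative values
        }
        return v / 100 // floor
    }

    func main() {
        replicas := 13 // illustrative count, not taken from the log
        fmt.Println("maxSurge:", resolvePercent(25, replicas, true))        // ceil(3.25)  = 4
        fmt.Println("maxUnavailable:", resolvePercent(25, replicas, false)) // floor(3.25) = 3
    }
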
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:24:01.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  2 14:24:14.455: INFO: Successfully updated pod "annotationupdate0427c4c0-f3ba-40c3-a668-b7dc8b16a938"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:24:16.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8579" for this suite.
Feb  2 14:24:38.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:24:38.705: INFO: namespace projected-8579 deletion completed in 22.148956795s

• [SLOW TEST:37.060 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
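Context, not part of the test output: the annotationupdate pod above mounts its own metadata.annotations through a projected downward-API volume; the kubelet rewrites the projected file when the annotations change, which is what the "Successfully updated pod" step exercises. A minimal sketch of such a volume (names illustrative):

    // projected_downward_sketch.go: the volume shape behind the test above.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := v1.Volume{
            Name: "podinfo", // illustrative volume name
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{
                        DownwardAPI: &v1.DownwardAPIProjection{
                            Items: []v1.DownwardAPIVolumeFile{{
                                // Exposes metadata.annotations as a file named
                                // "annotations" inside the mounted volume.
                                Path:     "annotations",
                                FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
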
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:24:38.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  2 14:24:49.409: INFO: Successfully updated pod "labelsupdatefb688b6c-345c-4db2-b750-df00bc18cfb0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:24:51.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6084" for this suite.
Feb  2 14:25:13.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:25:13.678: INFO: namespace downward-api-6084 deletion completed in 22.149997629s

• [SLOW TEST:34.973 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
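Context, not part of the test output: the labels variant works the same way, except the test mutates the pod's labels and waits for the downward-API file to refresh. A sketch of the update step, assuming the context-free client-go Patch signature contemporary with the v1.15 binary (label key and value are illustrative; namespace and pod name are taken from the log above):

    // label_update_sketch.go: patch a pod's labels so the kubelet refreshes
    // the downward-API labels file. Client-go signatures assumed v1.15-era.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        patch := []byte(`{"metadata":{"labels":{"key":"value2"}}}`) // illustrative values
        _, err = client.CoreV1().Pods("downward-api-6084").Patch(
            "labelsupdatefb688b6c-345c-4db2-b750-df00bc18cfb0",
            types.StrategicMergePatchType, patch)
        if err != nil {
            panic(err)
        }
        fmt.Println("labels patched; kubelet will rewrite the downward-API file")
    }
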
SSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:25:13.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:25:13.779: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  2 14:25:18.786: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  2 14:25:20.839: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  2 14:25:20.878: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7578,SelfLink:/apis/apps/v1/namespaces/deployment-7578/deployments/test-cleanup-deployment,UID:d1100b4c-9d34-4def-b6c9-ce352ea780d0,ResourceVersion:22827394,Generation:1,CreationTimestamp:2020-02-02 14:25:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  2 14:25:20.886: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7578,SelfLink:/apis/apps/v1/namespaces/deployment-7578/replicasets/test-cleanup-deployment-55bbcbc84c,UID:5db75ccc-cc4f-47b1-ab21-c7696c843709,ResourceVersion:22827396,Generation:1,CreationTimestamp:2020-02-02 14:25:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d1100b4c-9d34-4def-b6c9-ce352ea780d0 0xc00295ecd7 0xc00295ecd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 14:25:20.886: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  2 14:25:20.886: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7578,SelfLink:/apis/apps/v1/namespaces/deployment-7578/replicasets/test-cleanup-controller,UID:25a1f770-6218-4a54-9fbf-fe568a56150c,ResourceVersion:22827395,Generation:1,CreationTimestamp:2020-02-02 14:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d1100b4c-9d34-4def-b6c9-ce352ea780d0 0xc00295ec07 0xc00295ec08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  2 14:25:20.968: INFO: Pod "test-cleanup-controller-mk5mh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mk5mh,GenerateName:test-cleanup-controller-,Namespace:deployment-7578,SelfLink:/api/v1/namespaces/deployment-7578/pods/test-cleanup-controller-mk5mh,UID:23cf9baf-ff3b-4f9b-8ce3-d383f5fd7637,ResourceVersion:22827392,Generation:0,CreationTimestamp:2020-02-02 14:25:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 25a1f770-6218-4a54-9fbf-fe568a56150c 0xc00295f6d7 0xc00295f6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dj2gz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dj2gz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dj2gz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00295f750} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00295f770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:25:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:25:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:25:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:25:13 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-02 14:25:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-02 14:25:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a89fb4fd4eaf304ae3ae8092d8ed5340917a0e751e7d0ee9c815640cbfe74e42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  2 14:25:20.969: INFO: Pod "test-cleanup-deployment-55bbcbc84c-xxf5m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-xxf5m,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7578,SelfLink:/api/v1/namespaces/deployment-7578/pods/test-cleanup-deployment-55bbcbc84c-xxf5m,UID:ae583f10-297f-41c0-90bd-5e4d72520cbf,ResourceVersion:22827397,Generation:0,CreationTimestamp:2020-02-02 14:25:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 5db75ccc-cc4f-47b1-ab21-c7696c843709 0xc00295f857 0xc00295f858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dj2gz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dj2gz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dj2gz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00295f8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00295f8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:25:20.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7578" for this suite.
Feb  2 14:25:27.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:25:27.236: INFO: namespace deployment-7578 deletion completed in 6.260591526s

• [SLOW TEST:13.558 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
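
The Deployment test above exercises cleanup of superseded ReplicaSets. A minimal sketch of the same behavior with kubectl (the name "web" and the images are illustrative, not taken from the test):

# revisionHistoryLimit bounds how many old ReplicaSets a Deployment retains.
kubectl create deployment web --image=docker.io/library/nginx:1.14-alpine
kubectl patch deployment web -p '{"spec":{"revisionHistoryLimit":1}}'
# Trigger a new rollout; ReplicaSets beyond the limit are garbage-collected.
kubectl set image deployment/web nginx=docker.io/library/nginx:1.15-alpine
kubectl get rs -l app=web
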
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:25:27.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  2 14:25:45.280: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:25:45.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5909" for this suite.
Feb  2 14:25:51.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:25:51.823: INFO: namespace container-runtime-5909 deletion completed in 6.333205136s

• [SLOW TEST:24.587 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
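
On the assertion 'Expected: &{} to match Container's Termination Message:  --' above: with terminationMessagePolicy FallbackToLogsOnError, container logs are substituted for the termination message only when the container fails, so a pod that succeeds without writing /dev/termination-log reports an empty message. A minimal sketch (pod name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo done; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Expect empty output: the container succeeded, so no log fallback occurs.
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
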
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:25:51.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  2 14:25:52.063: INFO: Waiting up to 5m0s for pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae" in namespace "emptydir-5349" to be "success or failure"
Feb  2 14:25:52.123: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 59.429445ms
Feb  2 14:25:54.131: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067822175s
Feb  2 14:25:56.149: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085270239s
Feb  2 14:25:58.156: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092905493s
Feb  2 14:26:00.165: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101662248s
Feb  2 14:26:02.180: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116213332s
Feb  2 14:26:04.189: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125396827s
Feb  2 14:26:06.201: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.137508426s
STEP: Saw pod success
Feb  2 14:26:06.201: INFO: Pod "pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae" satisfied condition "success or failure"
Feb  2 14:26:06.205: INFO: Trying to get logs from node iruya-node pod pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae container test-container: 
STEP: delete the pod
Feb  2 14:26:06.300: INFO: Waiting for pod pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae to disappear
Feb  2 14:26:06.399: INFO: Pod pod-52acbaa5-7407-4dcc-bd88-41c26c20b1ae no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:26:06.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5349" for this suite.
Feb  2 14:26:12.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:26:12.552: INFO: namespace emptydir-5349 deletion completed in 6.145508297s

• [SLOW TEST:20.729 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
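
For reference, the volume type under test: an emptyDir with medium Memory is backed by tmpfs and mounted with a world-writable (0777-style) mode, which is what the pod's command checks. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/ed && mount | grep /mnt/ed"]  # expect a tmpfs mount
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-demo
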
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:26:12.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0202 14:26:26.879855       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  2 14:26:26.879: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:26:26.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1717" for this suite.
Feb  2 14:26:55.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:26:56.869: INFO: namespace gc-1717 deletion completed in 29.986501436s

• [SLOW TEST:44.317 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
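
The garbage-collector invariant being checked: a dependent is deleted only once all of its ownerReferences are gone, so pods that named both simpletest-rc-to-stay and the deleted RC as owners survive the deletion. One way to inspect which owners a pod still has (sketch):

kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
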
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:26:56.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  2 14:27:36.163: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:36.327: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:38.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:38.343: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:40.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:40.342: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:42.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:42.417: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:44.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:44.338: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:46.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:46.344: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:48.328: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:48.341: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:50.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:50.338: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:52.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:52.339: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:54.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:54.337: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:56.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:56.336: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:27:58.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:27:58.941: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:28:00.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:28:00.341: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  2 14:28:02.327: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  2 14:28:02.335: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:28:02.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4089" for this suite.
Feb  2 14:28:22.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:28:22.600: INFO: namespace container-lifecycle-hook-4089 deletion completed in 20.233217086s

• [SLOW TEST:85.730 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
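
The hook under test: a preStop exec handler runs to completion (bounded by the grace period) before the container receives SIGTERM, which is why the pod lingers for a while after deletion in the log above. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo goodbye > /tmp/prestop && sleep 5"]
EOF
# Deletion waits for the handler to finish (or the grace period to expire).
kubectl delete pod prestop-demo
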
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:28:22.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  2 14:28:22.797: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9117" to be "success or failure"
Feb  2 14:28:22.820: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.923203ms
Feb  2 14:28:24.833: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035473877s
Feb  2 14:28:26.854: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056589583s
Feb  2 14:28:28.874: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076859264s
Feb  2 14:28:30.882: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08538013s
Feb  2 14:28:32.889: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.091553199s
Feb  2 14:28:34.903: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.106306799s
Feb  2 14:28:36.915: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 14.117590821s
Feb  2 14:28:38.929: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.131514476s
Feb  2 14:28:40.937: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.13990187s
Feb  2 14:28:42.944: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.146579177s
STEP: Saw pod success
Feb  2 14:28:42.944: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  2 14:28:42.946: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  2 14:28:43.097: INFO: Waiting for pod pod-host-path-test to disappear
Feb  2 14:28:43.164: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:28:43.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-9117" for this suite.
Feb  2 14:28:49.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:28:49.435: INFO: namespace hostpath-9117 deletion completed in 6.266083563s

• [SLOW TEST:26.835 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
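
The hostPath counterpart to the emptyDir test above: the pod mounts a directory from the node's filesystem and asserts on its mode. A minimal sketch (path and names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/hp"]
    volumeMounts:
    - name: hp
      mountPath: /mnt/hp
  volumes:
  - name: hp
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-demo
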
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:28:49.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 14:28:49.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5686'
Feb  2 14:28:51.773: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 14:28:51.773: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  2 14:28:51.827: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  2 14:28:51.883: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  2 14:28:51.944: INFO: scanned /root for discovery docs: 
Feb  2 14:28:51.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5686'
Feb  2 14:29:25.054: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  2 14:29:25.055: INFO: stdout: "Created e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f\nScaling up e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  2 14:29:25.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5686'
Feb  2 14:29:25.193: INFO: stderr: ""
Feb  2 14:29:25.194: INFO: stdout: "e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f-6j5qh e2e-test-nginx-rc-jhl5f "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  2 14:29:30.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5686'
Feb  2 14:29:30.425: INFO: stderr: ""
Feb  2 14:29:30.425: INFO: stdout: "e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f-6j5qh "
Feb  2 14:29:30.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f-6j5qh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5686'
Feb  2 14:29:30.623: INFO: stderr: ""
Feb  2 14:29:30.623: INFO: stdout: "true"
Feb  2 14:29:30.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f-6j5qh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5686'
Feb  2 14:29:30.736: INFO: stderr: ""
Feb  2 14:29:30.736: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  2 14:29:30.736: INFO: e2e-test-nginx-rc-4929c8586a6e5d105c8e558a5432a02f-6j5qh is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  2 14:29:30.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5686'
Feb  2 14:29:30.860: INFO: stderr: ""
Feb  2 14:29:30.860: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:29:30.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5686" for this suite.
Feb  2 14:29:56.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:29:56.353: INFO: namespace kubectl-5686 deletion completed in 25.486165774s

• [SLOW TEST:66.917 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
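
As the stderr above notes, both "kubectl run --generator=run/v1" and "kubectl rolling-update" are deprecated. The Deployment-based equivalent of rolling to the same image is a rollout restart (sketch; "rollout restart" is available from kubectl v1.15, deployment name illustrative):

kubectl rollout restart deployment/web
kubectl rollout status deployment/web
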
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:29:56.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  2 14:29:56.543: INFO: Waiting up to 5m0s for pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3" in namespace "downward-api-857" to be "success or failure"
Feb  2 14:29:56.551: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.47008ms
Feb  2 14:29:58.569: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026378217s
Feb  2 14:30:00.591: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04838197s
Feb  2 14:30:02.604: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060939517s
Feb  2 14:30:04.624: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080563288s
Feb  2 14:30:06.647: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103634411s
Feb  2 14:30:08.656: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.113019759s
Feb  2 14:30:10.678: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.13519875s
Feb  2 14:30:12.685: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.142304775s
Feb  2 14:30:14.698: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.155377478s
STEP: Saw pod success
Feb  2 14:30:14.699: INFO: Pod "downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3" satisfied condition "success or failure"
Feb  2 14:30:14.702: INFO: Trying to get logs from node iruya-node pod downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3 container dapi-container: 
STEP: delete the pod
Feb  2 14:30:15.160: INFO: Waiting for pod downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3 to disappear
Feb  2 14:30:15.193: INFO: Pod downward-api-b2a3adc8-c6e5-48b5-9d76-e2156c78b7d3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:30:15.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-857" for this suite.
Feb  2 14:30:21.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:30:21.505: INFO: namespace downward-api-857 deletion completed in 6.305192458s

• [SLOW TEST:25.152 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
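
The plumbing being verified: a fieldRef exposes pod metadata (here metadata.uid) to the container as an environment variable. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
kubectl logs downward-env-demo
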
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:30:21.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:30:21.679: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93" in namespace "downward-api-2981" to be "success or failure"
Feb  2 14:30:21.695: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 16.075328ms
Feb  2 14:30:23.706: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027593309s
Feb  2 14:30:25.716: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037021867s
Feb  2 14:30:27.838: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158822945s
Feb  2 14:30:29.848: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16873766s
Feb  2 14:30:31.862: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182760656s
Feb  2 14:30:33.882: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Pending", Reason="", readiness=false. Elapsed: 12.203018444s
Feb  2 14:30:35.897: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.218281472s
STEP: Saw pod success
Feb  2 14:30:35.897: INFO: Pod "downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93" satisfied condition "success or failure"
Feb  2 14:30:35.904: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93 container client-container: 
STEP: delete the pod
Feb  2 14:30:35.982: INFO: Waiting for pod downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93 to disappear
Feb  2 14:30:36.030: INFO: Pod downwardapi-volume-0e388819-110d-478e-9a80-ebec9990ad93 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:30:36.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2981" for this suite.
Feb  2 14:30:42.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:30:42.203: INFO: namespace downward-api-2981 deletion completed in 6.143025256s

• [SLOW TEST:20.698 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
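
The volume variant: a resourceFieldRef projects a container's resource limits into a file. A minimal sketch (names illustrative; with the default divisor of 1, a 500m CPU limit is rounded up, so the file reads "1"):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client
          resource: limits.cpu
EOF
kubectl logs downward-vol-demo
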
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:30:42.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:30:59.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5043" for this suite.
Feb  2 14:31:05.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:31:05.730: INFO: namespace kubelet-test-5043 deletion completed in 6.247249659s

• [SLOW TEST:23.526 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
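
What the test asserts: when a container's command always fails, the kubelet records a terminated state with a reason on the container status. A minimal sketch (name illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["false"]
EOF
# Typically prints "Error", with the non-zero exit code recorded alongside it.
kubectl get pod always-fails-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
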
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:31:05.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  2 14:31:05.891: INFO: Waiting up to 5m0s for pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05" in namespace "emptydir-1693" to be "success or failure"
Feb  2 14:31:05.929: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 38.692605ms
Feb  2 14:31:07.937: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0463901s
Feb  2 14:31:09.945: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054278882s
Feb  2 14:31:11.967: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075924866s
Feb  2 14:31:13.977: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085918515s
Feb  2 14:31:15.984: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093772289s
Feb  2 14:31:18.101: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 12.209827108s
Feb  2 14:31:20.111: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Pending", Reason="", readiness=false. Elapsed: 14.220199314s
Feb  2 14:31:22.131: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.24016371s
STEP: Saw pod success
Feb  2 14:31:22.131: INFO: Pod "pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05" satisfied condition "success or failure"
Feb  2 14:31:22.137: INFO: Trying to get logs from node iruya-node pod pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05 container test-container: 
STEP: delete the pod
Feb  2 14:31:22.201: INFO: Waiting for pod pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05 to disappear
Feb  2 14:31:22.295: INFO: Pod pod-8a8e7b09-729c-48a2-b9f6-4b69c39c8a05 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:31:22.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1693" for this suite.
Feb  2 14:31:28.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:31:28.850: INFO: namespace emptydir-1693 deletion completed in 6.550607742s

• [SLOW TEST:23.121 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:31:28.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  2 14:31:43.106: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  2 14:31:58.290: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:31:58.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8493" for this suite.
Feb  2 14:32:04.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:32:04.516: INFO: namespace pods-8493 deletion completed in 6.204914843s

• [SLOW TEST:35.665 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
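
Grace-period mechanics, for context: a graceful delete stamps deletionTimestamp and deletionGracePeriodSeconds on the pod, the kubelet sends SIGTERM, and the API object is removed once the kubelet confirms termination; that confirmation is the "termination notice" the test watches for. Sketch (pod name illustrative):

kubectl delete pod mypod --grace-period=30        # allow 30s after SIGTERM
kubectl delete pod mypod --grace-period=0 --force # skip waiting; use with care
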
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:32:04.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:32:04.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9475'
Feb  2 14:32:04.927: INFO: stderr: ""
Feb  2 14:32:04.927: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  2 14:32:04.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9475'
Feb  2 14:32:05.854: INFO: stderr: ""
Feb  2 14:32:05.854: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  2 14:32:06.879: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:06.879: INFO: Found 0 / 1
Feb  2 14:32:07.879: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:07.879: INFO: Found 0 / 1
Feb  2 14:32:08.890: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:08.890: INFO: Found 0 / 1
Feb  2 14:32:09.882: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:09.882: INFO: Found 0 / 1
Feb  2 14:32:10.870: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:10.870: INFO: Found 0 / 1
Feb  2 14:32:11.863: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:11.863: INFO: Found 0 / 1
Feb  2 14:32:12.866: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:12.866: INFO: Found 0 / 1
Feb  2 14:32:13.867: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:13.867: INFO: Found 0 / 1
Feb  2 14:32:14.866: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:14.866: INFO: Found 0 / 1
Feb  2 14:32:15.868: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:15.868: INFO: Found 0 / 1
Feb  2 14:32:16.868: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:16.869: INFO: Found 0 / 1
Feb  2 14:32:17.866: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:17.866: INFO: Found 1 / 1
Feb  2 14:32:17.866: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  2 14:32:17.871: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:32:17.871: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  2 14:32:17.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-pb98t --namespace=kubectl-9475'
Feb  2 14:32:18.101: INFO: stderr: ""
Feb  2 14:32:18.101: INFO: stdout: "Name:           redis-master-pb98t\nNamespace:      kubectl-9475\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 02 Feb 2020 14:32:05 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://bc0f98187870df07ed36df8d43aa4bb94853140c6e1b7e96163bf5445fb069e1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 02 Feb 2020 14:32:17 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q7nmq (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-q7nmq:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-q7nmq\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  13s   default-scheduler    Successfully assigned kubectl-9475/redis-master-pb98t to iruya-node\n  Normal  Pulled     7s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb  2 14:32:18.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9475'
Feb  2 14:32:18.400: INFO: stderr: ""
Feb  2 14:32:18.400: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-9475\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  13s   replication-controller  Created pod: redis-master-pb98t\n"
Feb  2 14:32:18.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9475'
Feb  2 14:32:18.621: INFO: stderr: ""
Feb  2 14:32:18.621: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-9475\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.99.212.166\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  2 14:32:18.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  2 14:32:18.746: INFO: stderr: ""
Feb  2 14:32:18.746: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 02 Feb 2020 14:32:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 02 Feb 2020 14:32:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 02 Feb 2020 14:32:18 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 02 Feb 2020 14:32:18 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         182d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         113d\n  kubectl-9475               redis-master-pb98t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  2 14:32:18.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9475'
Feb  2 14:32:18.834: INFO: stderr: ""
Feb  2 14:32:18.834: INFO: stdout: "Name:         kubectl-9475\nLabels:       e2e-framework=kubectl\n              e2e-run=76bbf8e8-e6fd-40b9-810b-62960b987b33\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:32:18.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9475" for this suite.
Feb  2 14:32:42.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:32:42.991: INFO: namespace kubectl-9475 deletion completed in 24.151666371s

• [SLOW TEST:38.475 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:32:42.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:33:00.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2930" for this suite.
Feb  2 14:33:22.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:33:22.443: INFO: namespace replication-controller-2930 deletion completed in 22.156646218s

• [SLOW TEST:39.452 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
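
The adoption flow: a bare pod whose labels match a ReplicationController's selector receives a controller ownerReference pointing at the RC. A minimal sketch (names illustrative):

kubectl run pod-adoption --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: adopter
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Expect "ReplicationController" once the controller has adopted the pod.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
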
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:33:22.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:33:22.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb" in namespace "projected-7978" to be "success or failure"
Feb  2 14:33:22.781: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.485807ms
Feb  2 14:33:24.792: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030613316s
Feb  2 14:33:26.797: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035822213s
Feb  2 14:33:28.802: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040552262s
Feb  2 14:33:30.824: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062316638s
Feb  2 14:33:32.836: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075187063s
Feb  2 14:33:34.854: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.092690408s
Feb  2 14:33:36.887: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.12580756s
Feb  2 14:33:39.158: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.396367664s
STEP: Saw pod success
Feb  2 14:33:39.158: INFO: Pod "downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb" satisfied condition "success or failure"
Feb  2 14:33:39.166: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb container client-container: 
STEP: delete the pod
Feb  2 14:33:39.221: INFO: Waiting for pod downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb to disappear
Feb  2 14:33:39.340: INFO: Pod downwardapi-volume-457bdeb4-efa9-4c1f-af33-c53cec4543fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:33:39.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7978" for this suite.
Feb  2 14:33:45.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:33:45.537: INFO: namespace projected-7978 deletion completed in 6.183742612s

• [SLOW TEST:23.094 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
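Note: the "success or failure" pod above mounts a projected downwardAPI volume and prints its own cpu limit; the framework generates its own spec, but a sketch of the shape involved looks like this (names and the 1-core limit are assumptions):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-limit   # prints "1" for a 1-core limit (default divisor)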
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:33:45.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-d1718e61-652c-4d6c-81fe-ac1078bcd54a
STEP: Creating secret with name s-test-opt-upd-6fc37442-adac-41b1-86ca-fcc37e63dd48
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d1718e61-652c-4d6c-81fe-ac1078bcd54a
STEP: Updating secret s-test-opt-upd-6fc37442-adac-41b1-86ca-fcc37e63dd48
STEP: Creating secret with name s-test-opt-create-14e0f835-8752-4139-9fd9-40c70ce9c8e2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:34:12.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1324" for this suite.
Feb  2 14:34:52.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:34:52.385: INFO: namespace projected-1324 deletion completed in 40.143674264s

• [SLOW TEST:66.847 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
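Note: "optional" in the test above means the projected secret source may be absent when the pod starts; the kubelet later syncs the volume when the secret appears, changes, or is deleted. A sketch under the same assumptions (secret and pod names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: app
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-create
          optional: true    # pod starts even while this secret does not exist yet
EOF

# Creating the secret afterwards: the key shows up in the mounted volume
# within the kubelet's sync interval.
kubectl create secret generic s-test-opt-create --from-literal=data=value
kubectl exec projected-secret-demo -- cat /etc/secrets/data   # value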
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:34:52.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 14:34:52.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6319'
Feb  2 14:34:52.669: INFO: stderr: ""
Feb  2 14:34:52.669: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  2 14:34:52.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6319'
Feb  2 14:34:56.570: INFO: stderr: ""
Feb  2 14:34:56.570: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:34:56.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6319" for this suite.
Feb  2 14:35:02.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:35:02.799: INFO: namespace kubectl-6319 deletion completed in 6.159351945s

• [SLOW TEST:10.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
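Note: the kubectl invocation under test is shown verbatim in the log; the property being asserted is that --restart=Never with the run-pod/v1 generator yields a bare pod (no managing controller) whose restartPolicy is Never. The same check by hand:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # Never
kubectl delete pod e2e-test-nginx-pod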
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:35:02.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  2 14:35:03.066: INFO: Waiting up to 5m0s for pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f" in namespace "var-expansion-2556" to be "success or failure"
Feb  2 14:35:03.087: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.836506ms
Feb  2 14:35:05.097: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030499907s
Feb  2 14:35:07.106: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039829222s
Feb  2 14:35:09.114: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047812701s
Feb  2 14:35:11.121: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054885059s
Feb  2 14:35:13.129: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.062731853s
Feb  2 14:35:15.140: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.073184294s
Feb  2 14:35:17.150: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.083581929s
STEP: Saw pod success
Feb  2 14:35:17.150: INFO: Pod "var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f" satisfied condition "success or failure"
Feb  2 14:35:17.153: INFO: Trying to get logs from node iruya-node pod var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f container dapi-container: 
STEP: delete the pod
Feb  2 14:35:17.228: INFO: Waiting for pod var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f to disappear
Feb  2 14:35:17.244: INFO: Pod var-expansion-96314a79-a506-44ef-8fd8-b0ce28b8661f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:35:17.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2556" for this suite.
Feb  2 14:35:23.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:35:23.488: INFO: namespace var-expansion-2556 deletion completed in 6.23800779s

• [SLOW TEST:20.689 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
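Note: "substituting values in a container's command" refers to Kubernetes expanding $(VAR) references in command/args from the container's env, with no shell involved. A minimal sketch (names and values are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: MESSAGE
      value: "test-value"
    # The kubelet expands $(MESSAGE) before exec'ing the binary; no shell runs.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo   # test-value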
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:35:23.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-9a8bc09a-7b1d-47d0-89cf-119c08209f9d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:35:43.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5851" for this suite.
Feb  2 14:36:23.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:36:24.018: INFO: namespace configmap-5851 deletion completed in 40.135097024s

• [SLOW TEST:60.530 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
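Note: ConfigMaps carry non-UTF-8 payloads in a separate binaryData field (base64-encoded in the API object); the test above mounts such a key and compares bytes. A sketch, assuming a small binary file:

printf '\x00\x01\x02\xff' > data.bin
kubectl create configmap binary-cm --from-file=dump=data.bin   # non-UTF-8 content lands in binaryData
kubectl get configmap binary-cm -o jsonpath='{.binaryData.dump}' | base64 -d | od -An -tx1
# 00 01 02 ff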
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:36:24.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-49d52c2a-e72d-4677-a706-cd6327bbdfa9
STEP: Creating a pod to test consume configMaps
Feb  2 14:36:24.329: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363" in namespace "projected-2478" to be "success or failure"
Feb  2 14:36:24.342: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 13.55963ms
Feb  2 14:36:26.352: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023132159s
Feb  2 14:36:28.367: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038177619s
Feb  2 14:36:30.395: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066611985s
Feb  2 14:36:32.403: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074694401s
Feb  2 14:36:34.413: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 10.084770795s
Feb  2 14:36:36.423: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 12.094241585s
Feb  2 14:36:38.655: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Pending", Reason="", readiness=false. Elapsed: 14.326507487s
Feb  2 14:36:40.665: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.336186454s
STEP: Saw pod success
Feb  2 14:36:40.665: INFO: Pod "pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363" satisfied condition "success or failure"
Feb  2 14:36:40.670: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:36:41.013: INFO: Waiting for pod pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363 to disappear
Feb  2 14:36:41.041: INFO: Pod pod-projected-configmaps-b98a0294-d5e5-4abd-9049-e1c71ac95363 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:36:41.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2478" for this suite.
Feb  2 14:36:47.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:36:47.199: INFO: namespace projected-2478 deletion completed in 6.148985141s

• [SLOW TEST:23.180 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
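Note: "with mappings" means the volume's items remap a ConfigMap key to a different file path, and "as non-root" means the pod runs under a non-zero UID. A sketch (UID 1000 and all names are assumptions; the two later projected-configMap tests in this log are the same pattern without the mapping or the UID):

kubectl create configmap demo-cm --from-literal=data-1=value-1

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mapped
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/path/to/data"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: data-1          # original key ...
            path: path/to/data   # ... surfaced under a remapped path
EOF
kubectl logs projected-cm-mapped   # value-1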
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:36:47.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  2 14:36:47.541: INFO: Number of nodes with available pods: 0
Feb  2 14:36:47.542: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:48.561: INFO: Number of nodes with available pods: 0
Feb  2 14:36:48.562: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:49.875: INFO: Number of nodes with available pods: 0
Feb  2 14:36:49.875: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:50.672: INFO: Number of nodes with available pods: 0
Feb  2 14:36:50.672: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:52.293: INFO: Number of nodes with available pods: 0
Feb  2 14:36:52.293: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:52.569: INFO: Number of nodes with available pods: 0
Feb  2 14:36:52.569: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:53.549: INFO: Number of nodes with available pods: 0
Feb  2 14:36:53.549: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:54.556: INFO: Number of nodes with available pods: 0
Feb  2 14:36:54.556: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:56.400: INFO: Number of nodes with available pods: 0
Feb  2 14:36:56.401: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:57.988: INFO: Number of nodes with available pods: 0
Feb  2 14:36:57.988: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:58.673: INFO: Number of nodes with available pods: 0
Feb  2 14:36:58.674: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:36:59.555: INFO: Number of nodes with available pods: 0
Feb  2 14:36:59.555: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:01.530: INFO: Number of nodes with available pods: 0
Feb  2 14:37:01.530: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:01.648: INFO: Number of nodes with available pods: 0
Feb  2 14:37:01.648: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:02.559: INFO: Number of nodes with available pods: 1
Feb  2 14:37:02.559: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  2 14:37:03.650: INFO: Number of nodes with available pods: 2
Feb  2 14:37:03.650: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  2 14:37:03.714: INFO: Number of nodes with available pods: 1
Feb  2 14:37:03.714: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:04.729: INFO: Number of nodes with available pods: 1
Feb  2 14:37:04.729: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:05.800: INFO: Number of nodes with available pods: 1
Feb  2 14:37:05.800: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:06.725: INFO: Number of nodes with available pods: 1
Feb  2 14:37:06.725: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:07.732: INFO: Number of nodes with available pods: 1
Feb  2 14:37:07.732: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:08.771: INFO: Number of nodes with available pods: 1
Feb  2 14:37:08.771: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:09.732: INFO: Number of nodes with available pods: 1
Feb  2 14:37:09.732: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:10.733: INFO: Number of nodes with available pods: 1
Feb  2 14:37:10.734: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:11.793: INFO: Number of nodes with available pods: 1
Feb  2 14:37:11.793: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:12.730: INFO: Number of nodes with available pods: 1
Feb  2 14:37:12.730: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:13.741: INFO: Number of nodes with available pods: 1
Feb  2 14:37:13.741: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:14.739: INFO: Number of nodes with available pods: 1
Feb  2 14:37:14.740: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:15.732: INFO: Number of nodes with available pods: 1
Feb  2 14:37:15.733: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:16.723: INFO: Number of nodes with available pods: 1
Feb  2 14:37:16.723: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:17.732: INFO: Number of nodes with available pods: 1
Feb  2 14:37:17.732: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:18.729: INFO: Number of nodes with available pods: 1
Feb  2 14:37:18.729: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:19.825: INFO: Number of nodes with available pods: 1
Feb  2 14:37:19.825: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:20.732: INFO: Number of nodes with available pods: 1
Feb  2 14:37:20.732: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:21.731: INFO: Number of nodes with available pods: 1
Feb  2 14:37:21.731: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:22.730: INFO: Number of nodes with available pods: 1
Feb  2 14:37:22.731: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:23.735: INFO: Number of nodes with available pods: 1
Feb  2 14:37:23.735: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:24.733: INFO: Number of nodes with available pods: 1
Feb  2 14:37:24.733: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:25.737: INFO: Number of nodes with available pods: 1
Feb  2 14:37:25.737: INFO: Node iruya-node is running more than one daemon pod
Feb  2 14:37:26.727: INFO: Number of nodes with available pods: 2
Feb  2 14:37:26.728: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1174, will wait for the garbage collector to delete the pods
Feb  2 14:37:26.848: INFO: Deleting DaemonSet.extensions daemon-set took: 9.468078ms
Feb  2 14:37:27.148: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.477204ms
Feb  2 14:37:36.955: INFO: Number of nodes with available pods: 0
Feb  2 14:37:36.955: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 14:37:36.959: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1174/daemonsets","resourceVersion":"22829095"},"items":null}

Feb  2 14:37:36.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1174/pods","resourceVersion":"22829095"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:37:36.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1174" for this suite.
Feb  2 14:37:43.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:37:43.298: INFO: namespace daemonsets-1174 deletion completed in 6.241029879s

• [SLOW TEST:56.099 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
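Note: the repeated "Number of nodes with available pods" polling above is the framework waiting first for one daemon pod per schedulable node, then for the controller to revive a deleted pod. The equivalent by hand (label and image are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl rollout status ds/daemon-set            # waits until every node runs one pod

# Delete one daemon pod; the controller recreates it on the same node.
POD=$(kubectl get pods -l app=daemon-set -o jsonpath='{.items[0].metadata.name}')
kubectl delete pod "$POD"
kubectl get pods -l app=daemon-set -o wide      # back to one pod per node

kubectl delete ds/daemon-set                    # pods are garbage-collected with it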
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:37:43.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:37:43.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4" in namespace "downward-api-2518" to be "success or failure"
Feb  2 14:37:43.626: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.539204ms
Feb  2 14:37:45.635: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023756744s
Feb  2 14:37:47.642: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031423234s
Feb  2 14:37:49.652: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040784966s
Feb  2 14:37:51.719: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107700413s
Feb  2 14:37:53.727: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.116340011s
Feb  2 14:37:55.736: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125148985s
Feb  2 14:37:57.746: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.135381383s
Feb  2 14:37:59.757: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.146253867s
STEP: Saw pod success
Feb  2 14:37:59.757: INFO: Pod "downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4" satisfied condition "success or failure"
Feb  2 14:37:59.765: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4 container client-container: 
STEP: delete the pod
Feb  2 14:37:59.921: INFO: Waiting for pod downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4 to disappear
Feb  2 14:37:59.932: INFO: Pod downwardapi-volume-cbd0d943-1f54-45e3-ba2b-0e2291311fe4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:37:59.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2518" for this suite.
Feb  2 14:38:05.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:38:06.099: INFO: namespace downward-api-2518 deletion completed in 6.158135835s

• [SLOW TEST:22.801 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
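Note: "set mode on item file" refers to the per-item mode field of a downwardAPI volume; the container stats the projected file and the test compares permissions. A sketch (names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400        # YAML octal; the file appears as r--------
EOF
kubectl logs downwardapi-mode   # 400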
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:38:06.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:38:06.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:38:20.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7292" for this suite.
Feb  2 14:39:06.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:39:06.626: INFO: namespace pods-7292 deletion completed in 46.135065113s

• [SLOW TEST:60.525 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
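Note: the test above fetches the pod's log subresource over a websocket rather than a plain GET. Without a websocket client, the plain-HTTP form of the same endpoint can be shown through kubectl proxy (pod name, port, and namespace are illustrative; the websocket variant adds an Upgrade handshake on the same URL):

kubectl run logger --generator=run-pod/v1 --restart=Never \
  --image=busybox:1.29 -- sh -c 'echo hello; sleep 300'
kubectl wait --for=condition=Ready pod/logger --timeout=60s

kubectl proxy --port=8001 &
PROXY_PID=$!
sleep 2
curl -s 'http://127.0.0.1:8001/api/v1/namespaces/default/pods/logger/log'
# hello
kill "$PROXY_PID"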
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:39:06.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-95ee3068-cd13-4ec9-b8e6-20a9f269ac24
STEP: Creating a pod to test consume configMaps
Feb  2 14:39:06.800: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36" in namespace "projected-9078" to be "success or failure"
Feb  2 14:39:06.949: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 149.228205ms
Feb  2 14:39:08.958: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15774907s
Feb  2 14:39:10.965: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164995697s
Feb  2 14:39:12.973: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173300266s
Feb  2 14:39:14.983: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.18350149s
Feb  2 14:39:16.992: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 10.19212171s
Feb  2 14:39:19.001: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.200975784s
Feb  2 14:39:21.007: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 14.20762349s
Feb  2 14:39:23.015: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Pending", Reason="", readiness=false. Elapsed: 16.215395345s
Feb  2 14:39:25.025: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.225462528s
STEP: Saw pod success
Feb  2 14:39:25.025: INFO: Pod "pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36" satisfied condition "success or failure"
Feb  2 14:39:25.032: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:39:25.693: INFO: Waiting for pod pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36 to disappear
Feb  2 14:39:25.704: INFO: Pod pod-projected-configmaps-b078bc3d-2bbd-4c74-a7a3-b39b85da3d36 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:39:25.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9078" for this suite.
Feb  2 14:39:31.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:39:31.935: INFO: namespace projected-9078 deletion completed in 6.225939393s

• [SLOW TEST:25.309 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:39:31.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ea0c424b-6973-4c0d-b760-852547e36f1c
STEP: Creating a pod to test consume configMaps
Feb  2 14:39:32.378: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107" in namespace "projected-2438" to be "success or failure"
Feb  2 14:39:32.384: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558913ms
Feb  2 14:39:34.395: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017309134s
Feb  2 14:39:36.405: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02665773s
Feb  2 14:39:38.414: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036469536s
Feb  2 14:39:40.423: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045537742s
Feb  2 14:39:42.433: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054677378s
Feb  2 14:39:44.457: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 12.079036046s
Feb  2 14:39:46.480: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Pending", Reason="", readiness=false. Elapsed: 14.102482178s
Feb  2 14:39:48.500: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.12181423s
STEP: Saw pod success
Feb  2 14:39:48.500: INFO: Pod "pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107" satisfied condition "success or failure"
Feb  2 14:39:48.513: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:39:49.039: INFO: Waiting for pod pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107 to disappear
Feb  2 14:39:49.072: INFO: Pod pod-projected-configmaps-8bc95562-10c1-47ae-8661-c45099629107 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:39:49.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2438" for this suite.
Feb  2 14:39:55.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:39:55.219: INFO: namespace projected-2438 deletion completed in 6.139795839s

• [SLOW TEST:23.284 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:39:55.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:40:09.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7819" for this suite.
Feb  2 14:41:01.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:41:02.076: INFO: namespace kubelet-test-7819 deletion completed in 52.388923307s

• [SLOW TEST:66.857 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
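Note: "should not write to root filesystem" is enforced by readOnlyRootFilesystem in the container securityContext; any write to / fails while mounted volumes stay writable. A sketch (names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo test > /file && echo writable || echo 'write refused'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs busybox-readonly   # the redirect fails, so "write refused" is printed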
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:41:02.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  2 14:41:02.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5248'
Feb  2 14:41:05.121: INFO: stderr: ""
Feb  2 14:41:05.121: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  2 14:41:05.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5248'
Feb  2 14:41:05.417: INFO: stderr: ""
Feb  2 14:41:05.417: INFO: stdout: "update-demo-nautilus-jz9h7 update-demo-nautilus-rzg22 "
Feb  2 14:41:05.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9h7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:05.621: INFO: stderr: ""
Feb  2 14:41:05.621: INFO: stdout: ""
Feb  2 14:41:05.621: INFO: update-demo-nautilus-jz9h7 is created but not running
Feb  2 14:41:10.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5248'
Feb  2 14:41:10.987: INFO: stderr: ""
Feb  2 14:41:10.987: INFO: stdout: "update-demo-nautilus-jz9h7 update-demo-nautilus-rzg22 "
Feb  2 14:41:10.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9h7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:11.118: INFO: stderr: ""
Feb  2 14:41:11.118: INFO: stdout: ""
Feb  2 14:41:11.119: INFO: update-demo-nautilus-jz9h7 is created but not running
Feb  2 14:41:16.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5248'
Feb  2 14:41:16.307: INFO: stderr: ""
Feb  2 14:41:16.307: INFO: stdout: "update-demo-nautilus-jz9h7 update-demo-nautilus-rzg22 "
Feb  2 14:41:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9h7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:16.481: INFO: stderr: ""
Feb  2 14:41:16.481: INFO: stdout: ""
Feb  2 14:41:16.481: INFO: update-demo-nautilus-jz9h7 is created but not running
Feb  2 14:41:21.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5248'
Feb  2 14:41:21.667: INFO: stderr: ""
Feb  2 14:41:21.667: INFO: stdout: "update-demo-nautilus-jz9h7 update-demo-nautilus-rzg22 "
Feb  2 14:41:21.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9h7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:21.945: INFO: stderr: ""
Feb  2 14:41:21.945: INFO: stdout: "true"
Feb  2 14:41:21.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jz9h7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:22.082: INFO: stderr: ""
Feb  2 14:41:22.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 14:41:22.082: INFO: validating pod update-demo-nautilus-jz9h7
Feb  2 14:41:22.118: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 14:41:22.118: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 14:41:22.118: INFO: update-demo-nautilus-jz9h7 is verified up and running
Feb  2 14:41:22.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzg22 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:22.287: INFO: stderr: ""
Feb  2 14:41:22.287: INFO: stdout: "true"
Feb  2 14:41:22.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzg22 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5248'
Feb  2 14:41:22.375: INFO: stderr: ""
Feb  2 14:41:22.375: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  2 14:41:22.375: INFO: validating pod update-demo-nautilus-rzg22
Feb  2 14:41:22.415: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  2 14:41:22.415: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  2 14:41:22.415: INFO: update-demo-nautilus-rzg22 is verified up and running
STEP: using delete to clean up resources
Feb  2 14:41:22.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5248'
Feb  2 14:41:22.557: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  2 14:41:22.557: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  2 14:41:22.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5248'
Feb  2 14:41:22.709: INFO: stderr: "No resources found.\n"
Feb  2 14:41:22.709: INFO: stdout: ""
Feb  2 14:41:22.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5248 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  2 14:41:22.928: INFO: stderr: ""
Feb  2 14:41:22.928: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:41:22.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5248" for this suite.
Feb  2 14:41:46.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:41:47.056: INFO: namespace kubectl-5248 deletion completed in 24.112294615s

• [SLOW TEST:44.979 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:41:47.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  2 14:41:47.210: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:42:16.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3732" for this suite.
Feb  2 14:42:40.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:42:40.294: INFO: namespace init-container-3732 deletion completed in 24.222709623s

• [SLOW TEST:53.238 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
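Note: "invoke init containers on a RestartAlways pod" means the init containers run to completion, in order, before the main container starts. A sketch of the pattern (names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox:1.29
    command: ["true"]
  - name: init-2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run-1
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod init-demo -w
# STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Running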
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:42:40.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:44:06.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4930" for this suite.
Feb  2 14:44:12.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:44:12.748: INFO: namespace container-runtime-4930 deletion completed in 6.19640438s

• [SLOW TEST:92.454 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
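Note: the RestartCount/Phase assertions above reduce to how the container's exit code interacts with restartPolicy. With restartPolicy=Never, a zero exit leaves the pod Succeeded and a non-zero exit leaves it Failed, with no restarts either way (pod names are illustrative):

kubectl run exit-0 --generator=run-pod/v1 --restart=Never --image=busybox:1.29 -- sh -c 'exit 0'
kubectl run exit-1 --generator=run-pod/v1 --restart=Never --image=busybox:1.29 -- sh -c 'exit 1'
sleep 15   # give both pods time to terminate
kubectl get pods exit-0 exit-1 \
  -o custom-columns='NAME:.metadata.name,PHASE:.status.phase,RESTARTS:.status.containerStatuses[0].restartCount'
# exit-0 ends Succeeded, exit-1 ends Failed; neither is restarted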
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:44:12.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  2 14:44:13.071: INFO: Waiting up to 5m0s for pod "pod-1177332d-c189-4a60-968d-80189129e162" in namespace "emptydir-7332" to be "success or failure"
Feb  2 14:44:13.107: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 36.740626ms
Feb  2 14:44:15.118: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047605755s
Feb  2 14:44:17.126: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055694027s
Feb  2 14:44:19.198: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127630656s
Feb  2 14:44:21.210: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138770781s
Feb  2 14:44:23.276: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 10.20540754s
Feb  2 14:44:25.401: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 12.329926086s
Feb  2 14:44:27.409: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Pending", Reason="", readiness=false. Elapsed: 14.338316756s
Feb  2 14:44:29.417: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.346098109s
STEP: Saw pod success
Feb  2 14:44:29.417: INFO: Pod "pod-1177332d-c189-4a60-968d-80189129e162" satisfied condition "success or failure"
Feb  2 14:44:29.420: INFO: Trying to get logs from node iruya-node pod pod-1177332d-c189-4a60-968d-80189129e162 container test-container: 
STEP: delete the pod
Feb  2 14:44:29.477: INFO: Waiting for pod pod-1177332d-c189-4a60-968d-80189129e162 to disappear
Feb  2 14:44:29.500: INFO: Pod pod-1177332d-c189-4a60-968d-80189129e162 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:44:29.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7332" for this suite.
Feb  2 14:44:35.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:44:35.796: INFO: namespace emptydir-7332 deletion completed in 6.292112022s

• [SLOW TEST:23.048 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
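
This test writes a 0666-mode file into an emptyDir on the node's default medium from a non-root container and checks the resulting permissions. A minimal sketch of an equivalent pod in client-go (v1.15-era types; the image, UID, and command are placeholders rather than the test's actual values):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox",
                    Command:         []string{"sh", "-c", "touch /data/f && chmod 0666 /data/f && ls -l /data/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "data",
                    // An empty EmptyDirVolumeSource selects the node's default
                    // medium; setting Medium: corev1.StorageMediumMemory would
                    // request tmpfs instead.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }

The (root,0666,tmpfs), (non-root,0666,tmpfs), and (root,0644,default) variants that follow in this run differ only in the Medium field, the file mode, and the RunAsUser setting.
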
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:44:35.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  2 14:44:35.964: INFO: Waiting up to 5m0s for pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477" in namespace "emptydir-5361" to be "success or failure"
Feb  2 14:44:35.969: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 4.961544ms
Feb  2 14:44:37.978: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013771014s
Feb  2 14:44:39.985: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020875317s
Feb  2 14:44:41.998: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034014393s
Feb  2 14:44:44.009: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045319419s
Feb  2 14:44:46.017: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 10.052929504s
Feb  2 14:44:48.028: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 12.063981421s
Feb  2 14:44:50.035: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Pending", Reason="", readiness=false. Elapsed: 14.07100991s
Feb  2 14:44:52.042: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.078116912s
STEP: Saw pod success
Feb  2 14:44:52.042: INFO: Pod "pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477" satisfied condition "success or failure"
Feb  2 14:44:52.047: INFO: Trying to get logs from node iruya-node pod pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477 container test-container: 
STEP: delete the pod
Feb  2 14:44:52.189: INFO: Waiting for pod pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477 to disappear
Feb  2 14:44:52.199: INFO: Pod pod-3d29f08a-5b3a-4fd2-a55e-d6c220c55477 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:44:52.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5361" for this suite.
Feb  2 14:44:58.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:44:58.420: INFO: namespace emptydir-5361 deletion completed in 6.21429912s

• [SLOW TEST:22.623 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:44:58.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  2 14:44:58.588: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:45:23.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4943" for this suite.
Feb  2 14:45:31.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:45:31.645: INFO: namespace init-container-4943 deletion completed in 8.296854368s

• [SLOW TEST:33.224 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
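
The pod in this test carries an init container that always fails; with restartPolicy Never the kubelet does not retry it, so the app containers never start and the pod goes to phase Failed, which is what the ~25-second wait above observes. A hedged sketch of such a pod (v1.15-era types; images and commands are placeholders):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
            Spec: corev1.PodSpec{
                // No retries: one init-container failure is terminal.
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{{
                    Name:    "init-fail",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "exit 1"},
                }},
                Containers: []corev1.Container{{
                    Name:    "app",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo should never run"},
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }
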
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:45:31.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:45:31.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004" in namespace "downward-api-9558" to be "success or failure"
Feb  2 14:45:31.840: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 17.948274ms
Feb  2 14:45:33.857: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034964394s
Feb  2 14:45:35.870: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047922042s
Feb  2 14:45:37.879: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056149721s
Feb  2 14:45:39.887: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064436929s
Feb  2 14:45:41.894: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071869728s
Feb  2 14:45:43.907: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Pending", Reason="", readiness=false. Elapsed: 12.08496193s
Feb  2 14:45:45.921: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Running", Reason="", readiness=true. Elapsed: 14.098904131s
Feb  2 14:45:47.937: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.114366587s
STEP: Saw pod success
Feb  2 14:45:47.937: INFO: Pod "downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004" satisfied condition "success or failure"
Feb  2 14:45:47.942: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004 container client-container: 
STEP: delete the pod
Feb  2 14:45:48.104: INFO: Waiting for pod downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004 to disappear
Feb  2 14:45:48.111: INFO: Pod downwardapi-volume-03189f17-f6ee-4272-b7f1-d352f849f004 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:45:48.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9558" for this suite.
Feb  2 14:45:54.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:45:54.272: INFO: namespace downward-api-9558 deletion completed in 6.155364062s

• [SLOW TEST:22.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
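
The downward API volume here exposes limits.memory through a resourceFieldRef while the container deliberately sets no memory limit; in that case the kubelet reports the node's allocatable memory instead, which is the value the test verifies. A minimal sketch of the volume wiring (v1.15-era types; names and the image are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Illustrative: no resources.limits.memory is set on the
        // container, so the projected file reports node allocatable.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "cat /etc/podinfo/mem_limit"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "mem_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }
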
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:45:54.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-674
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  2 14:45:54.481: INFO: Found 0 stateful pods, waiting for 3
Feb  2 14:46:04.502: INFO: Found 1 stateful pods, waiting for 3
Feb  2 14:46:14.500: INFO: Found 2 stateful pods, waiting for 3
Feb  2 14:46:24.536: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:24.536: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:24.536: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 14:46:34.498: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:34.498: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:34.498: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 14:46:44.496: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:44.496: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:46:44.496: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  2 14:46:44.539: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  2 14:46:54.748: INFO: Updating stateful set ss2
Feb  2 14:46:54.835: INFO: Waiting for Pod statefulset-674/ss2-2 to have revision ss2-6c5cd755cd updated to revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  2 14:47:05.570: INFO: Found 2 stateful pods, waiting for 3
Feb  2 14:47:15.580: INFO: Found 2 stateful pods, waiting for 3
Feb  2 14:47:25.582: INFO: Found 2 stateful pods, waiting for 3
Feb  2 14:47:35.584: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:47:35.585: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:47:35.585: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  2 14:47:46.301: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:47:46.301: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  2 14:47:46.301: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  2 14:47:46.364: INFO: Updating stateful set ss2
Feb  2 14:47:46.546: INFO: Waiting for Pod statefulset-674/ss2-1 to have revision ss2-6c5cd755cd updated to revision ss2-7c9b54fd4c
Feb  2 14:47:57.037: INFO: Updating stateful set ss2
Feb  2 14:47:57.219: INFO: Waiting for StatefulSet statefulset-674/ss2 to complete update
Feb  2 14:47:57.219: INFO: Waiting for Pod statefulset-674/ss2-0 to have revision ss2-6c5cd755cd updated to revision ss2-7c9b54fd4c
Feb  2 14:48:07.230: INFO: Waiting for StatefulSet statefulset-674/ss2 to complete update
Feb  2 14:48:07.230: INFO: Waiting for Pod statefulset-674/ss2-0 to have revision ss2-6c5cd755cd updated to revision ss2-7c9b54fd4c
Feb  2 14:48:17.231: INFO: Waiting for StatefulSet statefulset-674/ss2 to complete update
Feb  2 14:48:17.232: INFO: Waiting for Pod statefulset-674/ss2-0 to have revision ss2-6c5cd755cd updated to revision ss2-7c9b54fd4c
Feb  2 14:48:27.234: INFO: Waiting for StatefulSet statefulset-674/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  2 14:48:37.233: INFO: Deleting all statefulset in ns statefulset-674
Feb  2 14:48:37.237: INFO: Scaling statefulset ss2 to 0
Feb  2 14:49:07.270: INFO: Waiting for statefulset status.replicas updated to 0
Feb  2 14:49:07.274: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:49:07.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-674" for this suite.
Feb  2 14:49:15.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:49:15.540: INFO: namespace statefulset-674 deletion completed in 8.232845932s

• [SLOW TEST:201.268 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
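
The canary and phased phases above are driven by the StatefulSet RollingUpdate partition: pods with an ordinal greater than or equal to the partition move to the new revision while lower ordinals keep the old one, so a partition of 2 on this 3-replica set updates only ss2-2, and lowering the partition step by step rolls out the rest. A minimal sketch of the strategy (v1.15-era types; labels and the service name are illustrative, the image is the update target shown in the log):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"app": "ss2-demo"}
        ss := &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss2-demo"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:    int32Ptr(3),
                ServiceName: "test",
                Selector:    &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                    Type: appsv1.RollingUpdateStatefulSetStrategyType,
                    // Partition 2 on 3 replicas: only the ordinal-2 pod
                    // (the canary) gets the new revision; dropping the
                    // partition afterwards rolls the remaining pods.
                    RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
                        Partition: int32Ptr(2),
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.15-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Println("statefulset spec built:", ss.Name)
    }
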
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:49:15.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 in namespace container-probe-4782
Feb  2 14:49:29.802: INFO: Started pod liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 in namespace container-probe-4782
STEP: checking the pod's current state and verifying that restartCount is present
Feb  2 14:49:29.816: INFO: Initial restart count of pod liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is 0
Feb  2 14:49:51.979: INFO: Restart count of pod container-probe-4782/liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is now 1 (22.162869628s elapsed)
Feb  2 14:50:12.267: INFO: Restart count of pod container-probe-4782/liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is now 2 (42.450508115s elapsed)
Feb  2 14:50:30.490: INFO: Restart count of pod container-probe-4782/liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is now 3 (1m0.674108577s elapsed)
Feb  2 14:50:48.818: INFO: Restart count of pod container-probe-4782/liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is now 4 (1m19.001449962s elapsed)
Feb  2 14:51:49.330: INFO: Restart count of pod container-probe-4782/liveness-ad2cbd90-6cc2-4840-8a82-097bba1c12d0 is now 5 (2m19.513456012s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:51:49.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4782" for this suite.
Feb  2 14:51:55.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:51:55.675: INFO: namespace container-probe-4782 deletion completed in 6.290690289s

• [SLOW TEST:160.135 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
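
Each restart recorded above is the kubelet reacting to a failing liveness probe; the test only asserts that restartCount never decreases. A common shape for such a pod — create a file, let the probe succeed, then remove it so the probe fails — sketched with v1.15-era types (Probe embeds Handler in this release; later releases renamed it ProbeHandler). Everything below is illustrative rather than the test's actual spec:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "busybox",
                    Command: []string{"sh", "-c",
                        "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"cat", "/tmp/health"},
                            },
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
                // RestartPolicy defaults to Always, so each probe failure
                // produces another restart and a strictly higher
                // restartCount in the container status.
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }
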
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:51:55.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  2 14:52:10.521: INFO: Successfully updated pod "labelsupdate2a75a722-5ecb-4d73-9873-f47438817d7d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:52:12.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3001" for this suite.
Feb  2 14:52:50.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:52:50.769: INFO: namespace projected-3001 deletion completed in 38.1532222s

• [SLOW TEST:55.094 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
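
The 'update labels on modification' check relies on the kubelet rewriting a projected downward API file when pod metadata changes: the test mounts metadata.labels as a file, patches the pod's labels, and waits for the file content to catch up. A minimal sketch of the projection (v1.15-era types; names, image, and the initial label are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate-demo",
                Labels: map[string]string{"stage": "before"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "podinfo", MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        // The kubelet refreshes this file on its
                                        // sync loop after the labels are patched.
                                        Path:     "labels",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }
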
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:52:50.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  2 14:52:50.957: INFO: Waiting up to 5m0s for pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca" in namespace "emptydir-3873" to be "success or failure"
Feb  2 14:52:51.047: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 89.715909ms
Feb  2 14:52:53.058: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100292368s
Feb  2 14:52:55.065: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107840973s
Feb  2 14:52:57.077: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119905264s
Feb  2 14:52:59.086: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128166921s
Feb  2 14:53:01.095: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.137840001s
Feb  2 14:53:03.113: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.15542871s
Feb  2 14:53:05.123: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.165320065s
STEP: Saw pod success
Feb  2 14:53:05.123: INFO: Pod "pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca" satisfied condition "success or failure"
Feb  2 14:53:05.127: INFO: Trying to get logs from node iruya-node pod pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca container test-container: 
STEP: delete the pod
Feb  2 14:53:05.225: INFO: Waiting for pod pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca to disappear
Feb  2 14:53:05.230: INFO: Pod pod-27e480e2-6abb-4db6-aa79-ea56b7be12ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:53:05.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3873" for this suite.
Feb  2 14:53:11.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:53:11.473: INFO: namespace emptydir-3873 deletion completed in 6.236628835s

• [SLOW TEST:20.703 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:53:11.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  2 14:53:25.333: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:53:25.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7316" for this suite.
Feb  2 14:53:31.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:53:31.628: INFO: namespace container-runtime-7316 deletion completed in 6.159595798s

• [SLOW TEST:20.155 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
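
Here the container writes its own termination message: terminationMessagePath points at a non-default file writable by the non-root user, and the kubelet copies that file's content ('DONE' above) into the container's terminated state, which the test then reads back. A hedged sketch of the container settings (v1.15-era types; everything except the DONE message shown in the log is a placeholder):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "termination-demo",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "printf DONE > /dev/termination-custom"},
                    // Non-default path, writable by the non-root UID; the
                    // kubelet reads it into State.Terminated.Message.
                    TerminationMessagePath:   "/dev/termination-custom",
                    TerminationMessagePolicy: corev1.TerminationMessageReadFile,
                    SecurityContext:          &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
                }},
            },
        }
        fmt.Println("pod spec built:", pod.Name)
    }
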
SSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:53:31.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 14:53:31.779: INFO: Creating deployment "test-recreate-deployment"
Feb  2 14:53:31.810: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  2 14:53:31.867: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  2 14:53:33.898: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  2 14:53:33.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:35.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:37.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:39.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:41.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:43.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252012, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252011, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 14:53:45.916: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  2 14:53:45.938: INFO: Updating deployment test-recreate-deployment
Feb  2 14:53:45.938: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  2 14:53:46.813: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-2138,SelfLink:/apis/apps/v1/namespaces/deployment-2138/deployments/test-recreate-deployment,UID:e330b4d6-a8f7-4854-ba3f-b470963cfcb9,ResourceVersion:22831192,Generation:2,CreationTimestamp:2020-02-02 14:53:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-02 14:53:46 +0000 UTC 2020-02-02 14:53:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-02 14:53:46 +0000 UTC 2020-02-02 14:53:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  2 14:53:46.820: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-2138,SelfLink:/apis/apps/v1/namespaces/deployment-2138/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5079a8e3-1c3a-443e-a535-db544264bde3,ResourceVersion:22831190,Generation:1,CreationTimestamp:2020-02-02 14:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e330b4d6-a8f7-4854-ba3f-b470963cfcb9 0xc001727147 0xc001727148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 14:53:46.820: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  2 14:53:46.820: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-2138,SelfLink:/apis/apps/v1/namespaces/deployment-2138/replicasets/test-recreate-deployment-6df85df6b9,UID:bc3374fa-248a-4153-8730-27d58629acfb,ResourceVersion:22831181,Generation:2,CreationTimestamp:2020-02-02 14:53:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e330b4d6-a8f7-4854-ba3f-b470963cfcb9 0xc001727217 0xc001727218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  2 14:53:46.826: INFO: Pod "test-recreate-deployment-5c8c9cc69d-xcldm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-xcldm,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-2138,SelfLink:/api/v1/namespaces/deployment-2138/pods/test-recreate-deployment-5c8c9cc69d-xcldm,UID:8215e887-d2ad-49bf-8540-83e72ad0f4c1,ResourceVersion:22831187,Generation:0,CreationTimestamp:2020-02-02 14:53:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5079a8e3-1c3a-443e-a535-db544264bde3 0xc001727d47 0xc001727d48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gdxxl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gdxxl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gdxxl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001727dc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001727de0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 14:53:46 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:53:46.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2138" for this suite.
Feb  2 14:53:54.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:53:55.002: INFO: namespace deployment-2138 deletion completed in 8.169870301s

• [SLOW TEST:23.374 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
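
A Recreate deployment scales the old ReplicaSet to zero before the new one creates any pods, which is why the dump above shows the old ReplicaSet at Replicas:*0 while the new pod is still Pending. A minimal sketch of the strategy (v1.15-era types; the image and labels match what the log shows for the new ReplicaSet, other names are illustrative):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"name": "sample-pod-3"}
        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-demo"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // Recreate: terminate every old pod before starting new
                // ones, so old and new revisions never run together.
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RecreateDeploymentStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Println("deployment spec built:", d.Name)
    }
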
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:53:55.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  2 14:53:55.399: INFO: Waiting up to 5m0s for pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998" in namespace "emptydir-6674" to be "success or failure"
Feb  2 14:53:55.444: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 44.209774ms
Feb  2 14:53:57.454: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054093982s
Feb  2 14:53:59.473: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07335788s
Feb  2 14:54:01.593: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193366976s
Feb  2 14:54:03.605: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205326435s
Feb  2 14:54:05.613: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213810867s
Feb  2 14:54:07.620: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 12.220975995s
Feb  2 14:54:09.626: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 14.226893466s
Feb  2 14:54:11.635: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Pending", Reason="", readiness=false. Elapsed: 16.235236486s
Feb  2 14:54:13.657: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.257416109s
STEP: Saw pod success
Feb  2 14:54:13.657: INFO: Pod "pod-5a291c84-4ffe-4dda-b0da-82676918b998" satisfied condition "success or failure"
Feb  2 14:54:13.665: INFO: Trying to get logs from node iruya-node pod pod-5a291c84-4ffe-4dda-b0da-82676918b998 container test-container: 
STEP: delete the pod
Feb  2 14:54:13.824: INFO: Waiting for pod pod-5a291c84-4ffe-4dda-b0da-82676918b998 to disappear
Feb  2 14:54:13.877: INFO: Pod pod-5a291c84-4ffe-4dda-b0da-82676918b998 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:54:13.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6674" for this suite.
Feb  2 14:54:21.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:54:22.088: INFO: namespace emptydir-6674 deletion completed in 8.195241101s

• [SLOW TEST:27.084 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:54:22.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:54:22.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391" in namespace "projected-494" to be "success or failure"
Feb  2 14:54:22.251: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 21.907574ms
Feb  2 14:54:24.260: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030589657s
Feb  2 14:54:26.267: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038405521s
Feb  2 14:54:28.274: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045245601s
Feb  2 14:54:30.290: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061439959s
Feb  2 14:54:32.301: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07237793s
Feb  2 14:54:34.317: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 12.088135005s
Feb  2 14:54:36.674: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 14.444574185s
Feb  2 14:54:38.709: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Pending", Reason="", readiness=false. Elapsed: 16.480021865s
Feb  2 14:54:40.718: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.488752408s
STEP: Saw pod success
Feb  2 14:54:40.718: INFO: Pod "downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391" satisfied condition "success or failure"
Feb  2 14:54:40.721: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391 container client-container: 
STEP: delete the pod
Feb  2 14:54:41.165: INFO: Waiting for pod downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391 to disappear
Feb  2 14:54:41.187: INFO: Pod downwardapi-volume-e50bbcd3-96b5-4cd9-a9d7-5c3787047391 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:54:41.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-494" for this suite.
Feb  2 14:54:47.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:54:47.439: INFO: namespace projected-494 deletion completed in 6.246635255s

• [SLOW TEST:25.351 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
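For reference, the volume this test exercises can be reproduced with a manifest along the following lines; this is a minimal sketch, and the pod name, image, and command are illustrative rather than taken from the run above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # Print the effective file mode, then the file contents.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # the mode the DefaultMode assertion checks
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

The podname-only test that follows uses the same downwardAPI source, just without the defaultMode override.
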
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:54:47.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 14:54:47.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81" in namespace "projected-5968" to be "success or failure"
Feb  2 14:54:47.688: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 26.623437ms
Feb  2 14:54:49.696: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034785591s
Feb  2 14:54:51.705: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043721307s
Feb  2 14:54:53.720: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05796636s
Feb  2 14:54:55.730: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068137823s
Feb  2 14:54:57.742: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 10.080519298s
Feb  2 14:54:59.755: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 12.093727288s
Feb  2 14:55:01.767: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Pending", Reason="", readiness=false. Elapsed: 14.105434211s
Feb  2 14:55:03.800: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.137834625s
STEP: Saw pod success
Feb  2 14:55:03.800: INFO: Pod "downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81" satisfied condition "success or failure"
Feb  2 14:55:03.818: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81 container client-container: 
STEP: delete the pod
Feb  2 14:55:04.041: INFO: Waiting for pod downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81 to disappear
Feb  2 14:55:04.056: INFO: Pod downwardapi-volume-c73d3ce9-75b9-43fe-8671-32d23db94d81 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:55:04.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5968" for this suite.
Feb  2 14:55:10.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:55:10.322: INFO: namespace projected-5968 deletion completed in 6.24271191s

• [SLOW TEST:22.881 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:55:10.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-a9e8e017-b2ba-4500-9fd4-e78c66c87ae8
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:55:10.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5916" for this suite.
Feb  2 14:55:18.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:55:18.939: INFO: namespace secrets-5916 deletion completed in 8.518596631s

• [SLOW TEST:8.617 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
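The negative case above can be reproduced directly: the apiserver rejects a Secret whose data map contains an empty key, so nothing is ever stored. A sketch, with an illustrative name and value:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWUtMQ==                     # empty key -> validation error from the apiserver
EOF
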
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:55:18.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-5c63fa99-69a4-4de9-9d30-0574b84f4152
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5c63fa99-69a4-4de9-9d30-0574b84f4152
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:55:36.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1434" for this suite.
Feb  2 14:56:00.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:56:00.246: INFO: namespace configmap-1434 deletion completed in 24.165577008s

• [SLOW TEST:41.306 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
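A minimal reproduction of the update-propagation behavior checked here, with assumed names; how quickly the mounted file changes depends on the kubelet's sync period and cache TTL:

kubectl create configmap cm-update-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-update-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-update-demo
EOF
kubectl patch configmap cm-update-demo -p '{"data":{"data-1":"value-2"}}'
# kubectl logs -f cm-update-demo eventually switches from value-1 to value-2.
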
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:56:00.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-f6wn
STEP: Creating a pod to test atomic-volume-subpath
Feb  2 14:56:01.638: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-f6wn" in namespace "subpath-3215" to be "success or failure"
Feb  2 14:56:01.678: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 40.650341ms
Feb  2 14:56:03.768: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130600828s
Feb  2 14:56:05.840: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202657664s
Feb  2 14:56:07.863: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225646163s
Feb  2 14:56:09.876: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.237873438s
Feb  2 14:56:11.908: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.269784473s
Feb  2 14:56:13.919: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281471493s
Feb  2 14:56:15.928: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.29045639s
Feb  2 14:56:17.939: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 16.301123345s
Feb  2 14:56:19.945: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 18.306976875s
Feb  2 14:56:22.051: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 20.412908799s
Feb  2 14:56:24.058: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 22.420156116s
Feb  2 14:56:26.068: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 24.429972964s
Feb  2 14:56:28.078: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 26.439842253s
Feb  2 14:56:30.230: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 28.592425405s
Feb  2 14:56:32.238: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 30.599937357s
Feb  2 14:56:34.248: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 32.610497874s
Feb  2 14:56:36.257: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Running", Reason="", readiness=true. Elapsed: 34.619257615s
Feb  2 14:56:38.267: INFO: Pod "pod-subpath-test-secret-f6wn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.628967725s
STEP: Saw pod success
Feb  2 14:56:38.267: INFO: Pod "pod-subpath-test-secret-f6wn" satisfied condition "success or failure"
Feb  2 14:56:38.275: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-f6wn container test-container-subpath-secret-f6wn: 
STEP: delete the pod
Feb  2 14:56:38.467: INFO: Waiting for pod pod-subpath-test-secret-f6wn to disappear
Feb  2 14:56:38.497: INFO: Pod pod-subpath-test-secret-f6wn no longer exists
STEP: Deleting pod pod-subpath-test-secret-f6wn
Feb  2 14:56:38.498: INFO: Deleting pod "pod-subpath-test-secret-f6wn" in namespace "subpath-3215"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:56:38.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3215" for this suite.
Feb  2 14:56:44.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:56:44.736: INFO: namespace subpath-3215 deletion completed in 6.180048907s

• [SLOW TEST:44.489 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
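The subpath arrangement under test, sketched with assumed names: a single key of a secret volume is mounted as a file via subPath, which is what the atomic-writer logic has to keep consistent across updates:

kubectl create secret generic subpath-demo --from-literal=test-file=contents
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["cat", "/test-volume/test-file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume/test-file
      subPath: test-file               # mount just this key, as a file
  volumes:
  - name: test-volume
    secret:
      secretName: subpath-demo
EOF
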
SS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:56:44.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a48de50b-5b35-409e-a891-682215939f2a
STEP: Creating secret with name s-test-opt-upd-fdf20365-007f-4a84-b723-e3e3803b3386
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a48de50b-5b35-409e-a891-682215939f2a
STEP: Updating secret s-test-opt-upd-fdf20365-007f-4a84-b723-e3e3803b3386
STEP: Creating secret with name s-test-opt-create-d63496e9-b5ff-47e9-a77e-afff450f4bb3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:57:10.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1250" for this suite.
Feb  2 14:57:34.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:57:34.246: INFO: namespace secrets-1250 deletion completed in 24.195102544s

• [SLOW TEST:49.510 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
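The optional-secret mechanics exercised above, as a sketch with assumed names: a volume whose secret is marked optional mounts even while the secret is absent, and the kubelet reconciles later creates, updates, and deletes into the mounted files:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/opt-secret; sleep 5; done"]
    volumeMounts:
    - name: opt
      mountPath: /etc/opt-secret
  volumes:
  - name: opt
    secret:
      secretName: s-test-opt           # may not exist yet
      optional: true                   # pod starts anyway; the dir is just empty
EOF
kubectl create secret generic s-test-opt --from-literal=data-1=value-1
# Once the kubelet resyncs, data-1 appears under /etc/opt-secret.
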
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:57:34.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  2 14:57:34.405: INFO: Waiting up to 5m0s for pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288" in namespace "containers-8931" to be "success or failure"
Feb  2 14:57:34.415: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Pending", Reason="", readiness=false. Elapsed: 9.527391ms
Feb  2 14:57:36.429: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023109781s
Feb  2 14:57:38.615: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209680574s
Feb  2 14:57:40.622: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216506817s
Feb  2 14:57:42.638: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Pending", Reason="", readiness=false. Elapsed: 8.232251714s
Feb  2 14:57:44.660: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.254113257s
STEP: Saw pod success
Feb  2 14:57:44.660: INFO: Pod "client-containers-c9d55a16-123c-49c0-978e-902afb130288" satisfied condition "success or failure"
Feb  2 14:57:44.669: INFO: Trying to get logs from node iruya-node pod client-containers-c9d55a16-123c-49c0-978e-902afb130288 container test-container: 
STEP: delete the pod
Feb  2 14:57:44.781: INFO: Waiting for pod client-containers-c9d55a16-123c-49c0-978e-902afb130288 to disappear
Feb  2 14:57:44.784: INFO: Pod client-containers-c9d55a16-123c-49c0-978e-902afb130288 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:57:44.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8931" for this suite.
Feb  2 14:57:50.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:57:51.094: INFO: namespace containers-8931 deletion completed in 6.305589791s

• [SLOW TEST:16.847 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
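What "override the image's default command" means in pod-spec terms, sketched with an assumed image: .spec.containers[].command replaces the image's ENTRYPOINT (and args would replace CMD):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo", "entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF
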
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:57:51.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-8d0868a4-d7cd-4d27-ad78-d0a6c097f018
STEP: Creating a pod to test consume configMaps
Feb  2 14:57:51.240: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5" in namespace "projected-9233" to be "success or failure"
Feb  2 14:57:51.265: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.459745ms
Feb  2 14:57:53.272: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031921783s
Feb  2 14:57:55.285: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04492494s
Feb  2 14:57:57.298: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058014708s
Feb  2 14:57:59.308: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067923558s
STEP: Saw pod success
Feb  2 14:57:59.308: INFO: Pod "pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5" satisfied condition "success or failure"
Feb  2 14:57:59.318: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:57:59.693: INFO: Waiting for pod pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5 to disappear
Feb  2 14:57:59.705: INFO: Pod pod-projected-configmaps-88dc8be1-78df-415e-8e1f-e53924e9a4a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:57:59.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9233" for this suite.
Feb  2 14:58:05.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:58:05.843: INFO: namespace projected-9233 deletion completed in 6.131688475s

• [SLOW TEST:14.750 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:58:05.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  2 14:58:14.482: INFO: Successfully updated pod "pod-update-6ff76fa8-df7d-40a5-96bb-243a7fd17115"
STEP: verifying the updated pod is in kubernetes
Feb  2 14:58:14.532: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:58:14.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5686" for this suite.
Feb  2 14:58:36.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:58:36.738: INFO: namespace pods-5686 deletion completed in 22.199407549s

• [SLOW TEST:30.894 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
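The update path exercised here is an ordinary write to a running pod's mutable fields, labels being the classic example. A sketch with assumed names:

kubectl run pod-update-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
kubectl get pod pod-update-demo --show-labels    # shows the updated label
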
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:58:36.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-e3f70cf3-b9be-4406-95f8-c6a11917bf7d
STEP: Creating a pod to test consume configMaps
Feb  2 14:58:36.903: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826" in namespace "projected-5427" to be "success or failure"
Feb  2 14:58:36.906: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812763ms
Feb  2 14:58:38.916: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012268075s
Feb  2 14:58:40.927: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023488489s
Feb  2 14:58:42.942: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038946458s
Feb  2 14:58:44.951: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047361633s
Feb  2 14:58:46.958: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054904177s
STEP: Saw pod success
Feb  2 14:58:46.958: INFO: Pod "pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826" satisfied condition "success or failure"
Feb  2 14:58:46.962: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  2 14:58:47.108: INFO: Waiting for pod pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826 to disappear
Feb  2 14:58:47.117: INFO: Pod pod-projected-configmaps-fe101112-857e-44ed-8d17-d9fdf80e2826 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:58:47.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5427" for this suite.
Feb  2 14:58:53.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:58:53.245: INFO: namespace projected-5427 deletion completed in 6.12359312s

• [SLOW TEST:16.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
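The "multiple volumes" case is simply the same configMap referenced by two volume entries and mounted at two paths in one pod. A sketch with assumed names:

kubectl create configmap multi-vol-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-vol-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    projected:
      sources:
      - configMap:
          name: multi-vol-demo
  - name: cm-b
    projected:
      sources:
      - configMap:
          name: multi-vol-demo
EOF
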
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:58:53.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  2 14:58:53.363: INFO: namespace kubectl-2293
Feb  2 14:58:53.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2293'
Feb  2 14:58:55.432: INFO: stderr: ""
Feb  2 14:58:55.432: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  2 14:58:56.444: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:58:56.444: INFO: Found 0 / 1
Feb  2 14:58:57.442: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:58:57.443: INFO: Found 0 / 1
Feb  2 14:58:58.444: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:58:58.444: INFO: Found 0 / 1
Feb  2 14:58:59.439: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:58:59.439: INFO: Found 0 / 1
Feb  2 14:59:00.450: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:59:00.450: INFO: Found 0 / 1
Feb  2 14:59:01.440: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:59:01.440: INFO: Found 0 / 1
Feb  2 14:59:02.440: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:59:02.440: INFO: Found 0 / 1
Feb  2 14:59:03.471: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:59:03.471: INFO: Found 1 / 1
Feb  2 14:59:03.471: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  2 14:59:03.477: INFO: Selector matched 1 pods for map[app:redis]
Feb  2 14:59:03.477: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  2 14:59:03.477: INFO: wait on redis-master startup in kubectl-2293 
Feb  2 14:59:03.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5flk7 redis-master --namespace=kubectl-2293'
Feb  2 14:59:03.689: INFO: stderr: ""
Feb  2 14:59:03.689: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 02 Feb 14:59:02.303 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 Feb 14:59:02.303 # Server started, Redis version 3.2.12\n1:M 02 Feb 14:59:02.305 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 Feb 14:59:02.305 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  2 14:59:03.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2293'
Feb  2 14:59:04.063: INFO: stderr: ""
Feb  2 14:59:04.063: INFO: stdout: "service/rm2 exposed\n"
Feb  2 14:59:04.074: INFO: Service rm2 in namespace kubectl-2293 found.
STEP: exposing service
Feb  2 14:59:06.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2293'
Feb  2 14:59:06.395: INFO: stderr: ""
Feb  2 14:59:06.396: INFO: stdout: "service/rm3 exposed\n"
Feb  2 14:59:06.440: INFO: Service rm3 in namespace kubectl-2293 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:59:08.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2293" for this suite.
Feb  2 14:59:32.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:59:32.703: INFO: namespace kubectl-2293 deletion completed in 24.236880932s

• [SLOW TEST:39.458 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
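The two expose invocations above differ only in their source object (an RC, then an existing service); either way the result is a plain Service whose port maps to the pods' 6379. One way to confirm the mapping, using the rm2 name created above (illustrative only, since the suite deletes the namespace afterwards):

kubectl get service rm2 -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'
# 1234 -> 6379
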
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:59:32.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  2 14:59:32.891: INFO: Waiting up to 5m0s for pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b" in namespace "emptydir-7710" to be "success or failure"
Feb  2 14:59:32.899: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.631315ms
Feb  2 14:59:34.921: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030001093s
Feb  2 14:59:36.932: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040489335s
Feb  2 14:59:38.946: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054701738s
Feb  2 14:59:40.957: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065467443s
STEP: Saw pod success
Feb  2 14:59:40.957: INFO: Pod "pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b" satisfied condition "success or failure"
Feb  2 14:59:40.963: INFO: Trying to get logs from node iruya-node pod pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b container test-container: 
STEP: delete the pod
Feb  2 14:59:41.077: INFO: Waiting for pod pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b to disappear
Feb  2 14:59:41.084: INFO: Pod pod-1b0bfe51-8e6d-4473-af06-ca9fb7a2e22b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:59:41.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7710" for this suite.
Feb  2 14:59:47.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 14:59:47.264: INFO: namespace emptydir-7710 deletion completed in 6.175771856s

• [SLOW TEST:14.561 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
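What the (root,0777,default) triple denotes: run as root, create a file with mode 0777, on the default emptyDir medium (node disk). A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "umask 0; touch /test-volume/f; chmod 0777 /test-volume/f; stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium; medium: Memory would use tmpfs
EOF
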
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 14:59:47.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  2 14:59:47.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1750 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  2 14:59:55.650: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0202 14:59:54.300896    3529 log.go:172] (0xc000118630) (0xc000354140) Create stream\nI0202 14:59:54.301367    3529 log.go:172] (0xc000118630) (0xc000354140) Stream added, broadcasting: 1\nI0202 14:59:54.312856    3529 log.go:172] (0xc000118630) Reply frame received for 1\nI0202 14:59:54.313011    3529 log.go:172] (0xc000118630) (0xc0003541e0) Create stream\nI0202 14:59:54.313026    3529 log.go:172] (0xc000118630) (0xc0003541e0) Stream added, broadcasting: 3\nI0202 14:59:54.314944    3529 log.go:172] (0xc000118630) Reply frame received for 3\nI0202 14:59:54.314989    3529 log.go:172] (0xc000118630) (0xc0003ae500) Create stream\nI0202 14:59:54.315000    3529 log.go:172] (0xc000118630) (0xc0003ae500) Stream added, broadcasting: 5\nI0202 14:59:54.317042    3529 log.go:172] (0xc000118630) Reply frame received for 5\nI0202 14:59:54.317117    3529 log.go:172] (0xc000118630) (0xc0003b0000) Create stream\nI0202 14:59:54.317135    3529 log.go:172] (0xc000118630) (0xc0003b0000) Stream added, broadcasting: 7\nI0202 14:59:54.319766    3529 log.go:172] (0xc000118630) Reply frame received for 7\nI0202 14:59:54.320211    3529 log.go:172] (0xc0003541e0) (3) Writing data frame\nI0202 14:59:54.320453    3529 log.go:172] (0xc0003541e0) (3) Writing data frame\nI0202 14:59:54.330876    3529 log.go:172] (0xc000118630) Data frame received for 5\nI0202 14:59:54.330890    3529 log.go:172] (0xc0003ae500) (5) Data frame handling\nI0202 14:59:54.330909    3529 log.go:172] (0xc0003ae500) (5) Data frame sent\nI0202 14:59:54.339506    3529 log.go:172] (0xc000118630) Data frame received for 5\nI0202 14:59:54.339524    3529 log.go:172] (0xc0003ae500) (5) Data frame handling\nI0202 14:59:54.339533    3529 log.go:172] (0xc0003ae500) (5) Data frame sent\nI0202 14:59:55.594004    3529 log.go:172] (0xc000118630) Data frame received for 1\nI0202 14:59:55.594252    3529 log.go:172] (0xc000354140) (1) Data frame handling\nI0202 14:59:55.594369    3529 log.go:172] (0xc000354140) (1) Data frame sent\nI0202 14:59:55.594451    3529 log.go:172] (0xc000118630) (0xc000354140) Stream removed, broadcasting: 1\nI0202 14:59:55.596575    3529 log.go:172] (0xc000118630) (0xc0003ae500) Stream removed, broadcasting: 5\nI0202 14:59:55.596724    3529 log.go:172] (0xc000118630) (0xc0003541e0) Stream removed, broadcasting: 3\nI0202 14:59:55.597113    3529 log.go:172] (0xc000118630) (0xc0003b0000) Stream removed, broadcasting: 7\nI0202 14:59:55.597610    3529 log.go:172] (0xc000118630) (0xc000354140) Stream removed, broadcasting: 1\nI0202 14:59:55.597703    3529 log.go:172] (0xc000118630) (0xc0003541e0) Stream removed, broadcasting: 3\nI0202 14:59:55.597728    3529 log.go:172] (0xc000118630) (0xc0003ae500) Stream removed, broadcasting: 5\nI0202 14:59:55.597747    3529 log.go:172] (0xc000118630) (0xc0003b0000) Stream removed, broadcasting: 7\nI0202 14:59:55.599930    3529 log.go:172] (0xc000118630) Go away received\n"
Feb  2 14:59:55.651: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 14:59:57.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1750" for this suite.
Feb  2 15:00:03.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:00:03.871: INFO: namespace kubectl-1750 deletion completed in 6.199507169s

• [SLOW TEST:16.606 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
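The pattern under test, restated as a standalone command with an assumed job name (the job/v1 generator is deprecated in this release, as the stderr above notes; it is shown here as the suite invoked it). --rm deletes the Job once the attached session ends, which is what the final verification step checks:

echo abcd1234 | kubectl run rm-job-demo --image=busybox:1.29 \
  --rm --restart=OnFailure --generator=job/v1 --attach --stdin \
  -- sh -c 'cat && echo "stdin closed"'
# Echoes the piped stdin, prints "stdin closed", then deletes job.batch/rm-job-demo.
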
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:00:03.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  2 15:00:16.079: INFO: File wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-2d64526e-7b38-43b3-899e-3ee73d7ecf7d contains '' instead of 'foo.example.com.'
Feb  2 15:00:16.086: INFO: File jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-2d64526e-7b38-43b3-899e-3ee73d7ecf7d contains '' instead of 'foo.example.com.'
Feb  2 15:00:16.086: INFO: Lookups using dns-9514/dns-test-2d64526e-7b38-43b3-899e-3ee73d7ecf7d failed for: [wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local]

Feb  2 15:00:21.100: INFO: DNS probes using dns-test-2d64526e-7b38-43b3-899e-3ee73d7ecf7d succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  2 15:00:37.306: INFO: File wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  2 15:00:37.332: INFO: File jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains '' instead of 'bar.example.com.'
Feb  2 15:00:37.332: INFO: Lookups using dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb failed for: [wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local]

Feb  2 15:00:42.339: INFO: File wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  2 15:00:42.342: INFO: File jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  2 15:00:42.342: INFO: Lookups using dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb failed for: [wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local]

Feb  2 15:00:47.344: INFO: File wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  2 15:00:47.352: INFO: File jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  2 15:00:47.352: INFO: Lookups using dns-9514/dns-test-974c86d8-23af-4734-b677-994b7256edbb failed for: [wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local]

Feb  2 15:00:52.350: INFO: DNS probes using dns-test-974c86d8-23af-4734-b677-994b7256edbb succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9514.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  2 15:01:08.722: INFO: File jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local from pod  dns-9514/dns-test-ee33ae9b-65db-4fe7-bbc7-4fb39c229d65 contains '' instead of '10.96.240.68'
Feb  2 15:01:08.723: INFO: Lookups using dns-9514/dns-test-ee33ae9b-65db-4fe7-bbc7-4fb39c229d65 failed for: [jessie_udp@dns-test-service-3.dns-9514.svc.cluster.local]

Feb  2 15:01:13.743: INFO: DNS probes using dns-test-ee33ae9b-65db-4fe7-bbc7-4fb39c229d65 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:01:14.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9514" for this suite.
Feb  2 15:01:21.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:01:21.228: INFO: namespace dns-9514 deletion completed in 6.201159899s

• [SLOW TEST:77.357 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
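The service shape driving this test, sketched with the names used in the probes above (namespace omitted): an ExternalName service publishes a CNAME, and mutating .spec.externalName, or converting the service to ClusterIP, changes what the in-cluster name resolves to, which is exactly the three-phase sequence probed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# In-cluster: dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME
#   -> foo.example.com.
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
#   -> bar.example.com. (once DNS caches expire)
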
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:01:21.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 15:01:21.330: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 12.30637ms)
Feb  2 15:01:21.334: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.476239ms)
Feb  2 15:01:21.339: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.524792ms)
Feb  2 15:01:21.344: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.836031ms)
Feb  2 15:01:21.348: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.522ms)
Feb  2 15:01:21.353: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.25063ms)
Feb  2 15:01:21.357: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.418037ms)
Feb  2 15:01:21.361: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.103368ms)
Feb  2 15:01:21.369: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.172948ms)
Feb  2 15:01:21.402: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 32.497749ms)
Feb  2 15:01:21.408: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.810802ms)
Feb  2 15:01:21.419: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.869393ms)
Feb  2 15:01:21.426: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.822905ms)
Feb  2 15:01:21.433: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.472483ms)
Feb  2 15:01:21.442: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 8.687481ms)
Feb  2 15:01:21.449: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.173319ms)
Feb  2 15:01:21.456: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.253775ms)
Feb  2 15:01:21.464: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.885404ms)
Feb  2 15:01:21.470: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.721278ms)
Feb  2 15:01:21.475: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.611214ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:01:21.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-873" for this suite.
Feb  2 15:01:27.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:01:27.643: INFO: namespace proxy-873 deletion completed in 6.164838078s

• [SLOW TEST:6.415 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
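The endpoint being timed above is the node proxy subresource; it can be hit directly through the apiserver with kubectl's raw mode (node name taken from this run; any registered node works):

kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/
# Returns the node's log directory listing, e.g. alternatives.log ...
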
SS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:01:27.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6fbffaa9-e1cd-4f6e-9ca2-d752061112bc
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6fbffaa9-e1cd-4f6e-9ca2-d752061112bc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:01:37.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8287" for this suite.
Feb  2 15:02:00.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:02:00.128: INFO: namespace projected-8287 deletion completed in 22.154716552s

• [SLOW TEST:32.485 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:02:00.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ca91ea60-56fd-4b0a-8556-0f431606eb71
STEP: Creating a pod to test consume secrets
Feb  2 15:02:00.326: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf" in namespace "projected-3346" to be "success or failure"
Feb  2 15:02:00.356: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.241281ms
Feb  2 15:02:02.370: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043108843s
Feb  2 15:02:04.381: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054628429s
Feb  2 15:02:06.391: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064454577s
Feb  2 15:02:08.402: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075872506s
STEP: Saw pod success
Feb  2 15:02:08.402: INFO: Pod "pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf" satisfied condition "success or failure"
Feb  2 15:02:08.406: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf container projected-secret-volume-test: 
STEP: delete the pod
Feb  2 15:02:08.537: INFO: Waiting for pod pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf to disappear
Feb  2 15:02:08.546: INFO: Pod pod-projected-secrets-1140d0b1-bf11-4b9b-ae47-7ca3aad3f6bf no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:02:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3346" for this suite.
Feb  2 15:02:14.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:02:14.833: INFO: namespace projected-3346 deletion completed in 6.275490708s

• [SLOW TEST:14.704 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
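The [defaultMode set] variant asserts the permission bits on the projected files. A minimal sketch of the same shape, with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -Lc %a /etc/projected/data-1"]
    volumeMounts:
    - name: secret
      mountPath: /etc/projected
  volumes:
  - name: secret
    projected:
      defaultMode: 0400        # the mode the test asserts on every projected file
      sources:
      - secret:
          name: demo-secret
EOF

kubectl logs projected-secret-demo   # expect "400"

------------------------------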
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:02:14.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 15:02:14.943: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca" in namespace "downward-api-1479" to be "success or failure"
Feb  2 15:02:14.965: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca": Phase="Pending", Reason="", readiness=false. Elapsed: 22.417562ms
Feb  2 15:02:16.971: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028365752s
Feb  2 15:02:18.979: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036338874s
Feb  2 15:02:20.985: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042181854s
Feb  2 15:02:22.991: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048324937s
STEP: Saw pod success
Feb  2 15:02:22.991: INFO: Pod "downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca" satisfied condition "success or failure"
Feb  2 15:02:22.994: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca container client-container: 
STEP: delete the pod
Feb  2 15:02:23.074: INFO: Waiting for pod downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca to disappear
Feb  2 15:02:23.131: INFO: Pod downwardapi-volume-a6f25446-b63d-4138-941e-408ef689dcca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:02:23.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1479" for this suite.
Feb  2 15:02:29.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:02:29.319: INFO: namespace downward-api-1479 deletion completed in 6.177784568s

• [SLOW TEST:14.486 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
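The downward API volume plugin under test exposes a container's resource requests as files. A minimal sketch (names illustrative); note that requests.cpu is reported with a default divisor of 1, so a fractional request is rounded up to a whole core:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
EOF

kubectl logs downward-cpu-demo   # prints "1" (250m rounded up at divisor 1)

------------------------------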
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:02:29.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  2 15:02:29.487: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-49,SelfLink:/api/v1/namespaces/watch-49/configmaps/e2e-watch-test-resource-version,UID:10f19fa7-1df9-4e4d-a58f-bf14483dab0b,ResourceVersion:22832488,Generation:0,CreationTimestamp:2020-02-02 15:02:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  2 15:02:29.487: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-49,SelfLink:/api/v1/namespaces/watch-49/configmaps/e2e-watch-test-resource-version,UID:10f19fa7-1df9-4e4d-a58f-bf14483dab0b,ResourceVersion:22832489,Generation:0,CreationTimestamp:2020-02-02 15:02:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:02:29.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-49" for this suite.
Feb  2 15:02:35.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:02:35.644: INFO: namespace watch-49 deletion completed in 6.150245719s

• [SLOW TEST:6.324 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
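Starting a watch at a specific resourceVersion replays every event after that version, which is why the MODIFIED (mutation: 2) and DELETED notifications above arrive even though the configmap was already gone when the watch began. A sketch of the same flow against the raw API (names illustrative; the replay only works while the version is still inside the API server's retention window):

kubectl proxy --port=8001 &

# First update, and capture the resourceVersion it produced.
kubectl create configmap e2e-watch-demo --from-literal=mutation=0
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')

# Second update and delete, then replay everything after $RV.
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-demo
curl -sN "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name=e2e-watch-demo"
# streams the MODIFIED and DELETED events, mirroring the two lines logged above

------------------------------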
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:02:35.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Feb  2 15:02:47.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1befe822-2894-4594-86f0-033ca1412b20 -c busybox-main-container --namespace=emptydir-8977 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  2 15:02:48.412: INFO: stderr: "I0202 15:02:48.007599    3551 log.go:172] (0xc000a10370) (0xc00027e8c0) Create stream\nI0202 15:02:48.007794    3551 log.go:172] (0xc000a10370) (0xc00027e8c0) Stream added, broadcasting: 1\nI0202 15:02:48.045735    3551 log.go:172] (0xc000a10370) Reply frame received for 1\nI0202 15:02:48.048666    3551 log.go:172] (0xc000a10370) (0xc0009fe000) Create stream\nI0202 15:02:48.050763    3551 log.go:172] (0xc000a10370) (0xc0009fe000) Stream added, broadcasting: 3\nI0202 15:02:48.058193    3551 log.go:172] (0xc000a10370) Reply frame received for 3\nI0202 15:02:48.058451    3551 log.go:172] (0xc000a10370) (0xc00027e000) Create stream\nI0202 15:02:48.058484    3551 log.go:172] (0xc000a10370) (0xc00027e000) Stream added, broadcasting: 5\nI0202 15:02:48.061353    3551 log.go:172] (0xc000a10370) Reply frame received for 5\nI0202 15:02:48.249669    3551 log.go:172] (0xc000a10370) Data frame received for 3\nI0202 15:02:48.249769    3551 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0202 15:02:48.249841    3551 log.go:172] (0xc0009fe000) (3) Data frame sent\nI0202 15:02:48.401458    3551 log.go:172] (0xc000a10370) Data frame received for 1\nI0202 15:02:48.401695    3551 log.go:172] (0xc000a10370) (0xc0009fe000) Stream removed, broadcasting: 3\nI0202 15:02:48.401800    3551 log.go:172] (0xc00027e8c0) (1) Data frame handling\nI0202 15:02:48.401829    3551 log.go:172] (0xc00027e8c0) (1) Data frame sent\nI0202 15:02:48.401978    3551 log.go:172] (0xc000a10370) (0xc00027e000) Stream removed, broadcasting: 5\nI0202 15:02:48.402032    3551 log.go:172] (0xc000a10370) (0xc00027e8c0) Stream removed, broadcasting: 1\nI0202 15:02:48.402054    3551 log.go:172] (0xc000a10370) Go away received\nI0202 15:02:48.403504    3551 log.go:172] (0xc000a10370) (0xc00027e8c0) Stream removed, broadcasting: 1\nI0202 15:02:48.403523    3551 log.go:172] (0xc000a10370) (0xc0009fe000) Stream removed, broadcasting: 3\nI0202 15:02:48.403542    3551 log.go:172] (0xc000a10370) (0xc00027e000) Stream removed, broadcasting: 5\n"
Feb  2 15:02:48.413: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:02:48.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8977" for this suite.
Feb  2 15:02:54.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:02:54.600: INFO: namespace emptydir-8977 deletion completed in 6.180226854s

• [SLOW TEST:18.956 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
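The shared-volume check above boils down to two containers mounting the same emptyDir. A minimal sketch (names illustrative; here the main container writes and the sub-container reads, the mirror image of the test):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  containers:
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared
    emptyDir: {}
EOF

# The same check the test runs via kubectl exec:
kubectl exec shared-volume-demo -c busybox-sub-container -- cat /usr/share/volumeshare/shareddata.txt

------------------------------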
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:02:54.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-31d58bdc-ddbf-493d-a912-5843f5fc0905
STEP: Creating a pod to test consume configMaps
Feb  2 15:02:54.747: INFO: Waiting up to 5m0s for pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13" in namespace "configmap-5828" to be "success or failure"
Feb  2 15:02:54.754: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.811927ms
Feb  2 15:02:56.766: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018631351s
Feb  2 15:02:58.781: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033896933s
Feb  2 15:03:00.793: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045598254s
Feb  2 15:03:02.804: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05647203s
Feb  2 15:03:04.818: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Running", Reason="", readiness=true. Elapsed: 10.070300027s
Feb  2 15:03:06.838: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.09077687s
STEP: Saw pod success
Feb  2 15:03:06.838: INFO: Pod "pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13" satisfied condition "success or failure"
Feb  2 15:03:06.842: INFO: Trying to get logs from node iruya-node pod pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13 container configmap-volume-test: 
STEP: delete the pod
Feb  2 15:03:06.987: INFO: Waiting for pod pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13 to disappear
Feb  2 15:03:06.994: INFO: Pod pod-configmaps-51e93255-95a2-458e-a8c9-2a2ff4b29b13 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:03:06.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5828" for this suite.
Feb  2 15:03:13.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:03:13.122: INFO: namespace configmap-5828 deletion completed in 6.123398385s

• [SLOW TEST:18.521 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
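"mappings and Item mode" means the configMap keys are remapped to custom paths with per-item permissions. A minimal sketch (names illustrative):

kubectl create configmap demo-cm-map --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-items-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -Lc %a /etc/cm/path/to/data-1 && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm-map
      items:                 # the "mappings": remap the key to a nested path
      - key: data-1
        path: path/to/data-1
        mode: 0400           # the per-item mode the test asserts
EOF

kubectl logs configmap-items-demo   # expect "400" then "value-1"

------------------------------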
SS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:03:13.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  2 15:03:13.183: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  2 15:03:14.350: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  2 15:03:16.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 15:03:18.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 15:03:20.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 15:03:22.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 15:03:24.591: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716252594, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  2 15:03:30.797: INFO: Waited 4.164829246s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:03:31.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5009" for this suite.
Feb  2 15:03:37.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:03:37.425: INFO: namespace aggregator-5009 deletion completed in 6.231320929s

• [SLOW TEST:24.303 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
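Registering an extension server with the aggregator comes down to an APIService object pointing at an in-cluster Service. A sketch of roughly what "Registering the sample API server" creates; the group, service name, and namespace below are assumptions for illustration, and a Deployment/Service actually serving the API must already exist:

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true    # the sample server uses a self-signed cert
  service:                       # where the kube-apiserver proxies requests
    name: sample-api
    namespace: kube-system
EOF

# The readiness the test polls for surfaces as the Available condition:
kubectl get apiservice v1alpha1.wardle.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

------------------------------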
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:03:37.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  2 15:03:37.507: INFO: Waiting up to 5m0s for pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c" in namespace "containers-5911" to be "success or failure"
Feb  2 15:03:37.516: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.048069ms
Feb  2 15:03:39.525: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018007996s
Feb  2 15:03:41.539: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032015168s
Feb  2 15:03:43.565: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057924081s
Feb  2 15:03:45.572: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065487998s
Feb  2 15:03:47.580: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073088623s
STEP: Saw pod success
Feb  2 15:03:47.580: INFO: Pod "client-containers-098d8a80-412b-4854-9b7b-1db04d58059c" satisfied condition "success or failure"
Feb  2 15:03:47.585: INFO: Trying to get logs from node iruya-node pod client-containers-098d8a80-412b-4854-9b7b-1db04d58059c container test-container: 
STEP: delete the pod
Feb  2 15:03:47.989: INFO: Waiting for pod client-containers-098d8a80-412b-4854-9b7b-1db04d58059c to disappear
Feb  2 15:03:47.999: INFO: Pod client-containers-098d8a80-412b-4854-9b7b-1db04d58059c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:03:47.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5911" for this suite.
Feb  2 15:03:54.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:03:54.140: INFO: namespace containers-5911 deletion completed in 6.132179215s

• [SLOW TEST:16.714 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
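"Override the image's default arguments" maps to the pod spec's args field (and command for the entrypoint). A minimal sketch, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo"]                   # replaces the image ENTRYPOINT
    args: ["override", "arguments"]     # replaces the image CMD
EOF

kubectl logs args-override-demo   # prints "override arguments"

------------------------------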
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:03:54.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 15:03:54.216: INFO: Creating ReplicaSet my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4
Feb  2 15:03:54.244: INFO: Pod name my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4: Found 0 pods out of 1
Feb  2 15:03:59.300: INFO: Pod name my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4: Found 1 pods out of 1
Feb  2 15:03:59.300: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4" is running
Feb  2 15:04:01.311: INFO: Pod "my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4-cd8nq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 15:03:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 15:03:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 15:03:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-02 15:03:54 +0000 UTC Reason: Message:}])
Feb  2 15:04:01.311: INFO: Trying to dial the pod
Feb  2 15:04:06.362: INFO: Controller my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4: Got expected result from replica 1 [my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4-cd8nq]: "my-hostname-basic-53e5801e-fdc9-4e0a-a370-6e75189c78b4-cd8nq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:04:06.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3418" for this suite.
Feb  2 15:04:12.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:04:12.526: INFO: namespace replicaset-3418 deletion completed in 6.15319452s

• [SLOW TEST:18.386 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
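The ReplicaSet under test serves each pod's own hostname, which is how "Got expected result from replica 1" can match the pod name. A minimal sketch (the image is the e2e serve-hostname helper, cited here as an illustration; names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF

# Dial any replica and it answers with its pod name:
kubectl get pods -l name=my-hostname-basic -o wide

------------------------------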
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:04:12.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 15:04:12.645: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  2 15:04:15.304: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:04:15.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4979" for this suite.
Feb  2 15:04:28.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:04:28.476: INFO: namespace replication-controller-4979 deletion completed in 12.442399898s

• [SLOW TEST:15.949 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
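The failure condition being checked is ReplicaFailure, set by the controller when pod creation is rejected, here by a ResourceQuota. A minimal sketch (names illustrative):

kubectl create quota condition-test --hard=pods=2

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3              # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
EOF

# The quota rejection surfaces as a ReplicaFailure condition...
kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].reason}'
# ...and scaling down to fit the quota clears it, as the test verifies:
kubectl scale rc condition-test --replicas=2

------------------------------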
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:04:28.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-168
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-168 to expose endpoints map[]
Feb  2 15:04:28.703: INFO: Get endpoints failed (14.741296ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  2 15:04:29.712: INFO: successfully validated that service multi-endpoint-test in namespace services-168 exposes endpoints map[] (1.023453156s elapsed)
STEP: Creating pod pod1 in namespace services-168
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-168 to expose endpoints map[pod1:[100]]
Feb  2 15:04:33.826: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.098916213s elapsed, will retry)
Feb  2 15:04:36.873: INFO: successfully validated that service multi-endpoint-test in namespace services-168 exposes endpoints map[pod1:[100]] (7.145646296s elapsed)
STEP: Creating pod pod2 in namespace services-168
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-168 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  2 15:04:42.619: INFO: Unexpected endpoints: found map[0e27646d-3e46-4649-b4a7-6d784cf50cd6:[100]], expected map[pod1:[100] pod2:[101]] (5.736883914s elapsed, will retry)
Feb  2 15:04:44.663: INFO: successfully validated that service multi-endpoint-test in namespace services-168 exposes endpoints map[pod1:[100] pod2:[101]] (7.781484856s elapsed)
STEP: Deleting pod pod1 in namespace services-168
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-168 to expose endpoints map[pod2:[101]]
Feb  2 15:04:44.728: INFO: successfully validated that service multi-endpoint-test in namespace services-168 exposes endpoints map[pod2:[101]] (46.194015ms elapsed)
STEP: Deleting pod pod2 in namespace services-168
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-168 to expose endpoints map[]
Feb  2 15:04:44.769: INFO: successfully validated that service multi-endpoint-test in namespace services-168 exposes endpoints map[] (31.376645ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:04:44.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-168" for this suite.
Feb  2 15:05:08.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:05:09.033: INFO: namespace services-168 deletion completed in 24.158644721s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.557 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
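The endpoints maps in the log (map[pod1:[100] pod2:[101]]) come from a Service with two named ports whose target ports differ per pod. A minimal sketch of the Service side (names illustrative; ready pods matching the selector must exist for the Endpoints object to fill in):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF

# Watch the Endpoints go from empty to one address set per named port
# as pods are created and deleted, which is what the test validates:
kubectl get endpoints multi-endpoint-test --watch

------------------------------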
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:05:09.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  2 15:05:09.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a" in namespace "projected-148" to be "success or failure"
Feb  2 15:05:09.265: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.136826ms
Feb  2 15:05:11.280: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062597191s
Feb  2 15:05:13.291: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073958925s
Feb  2 15:05:15.300: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082847957s
Feb  2 15:05:17.318: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100288038s
STEP: Saw pod success
Feb  2 15:05:17.318: INFO: Pod "downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a" satisfied condition "success or failure"
Feb  2 15:05:17.323: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a container client-container: 
STEP: delete the pod
Feb  2 15:05:17.385: INFO: Waiting for pod downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a to disappear
Feb  2 15:05:17.393: INFO: Pod downwardapi-volume-aa4dacdf-9610-40a1-b6c3-e7a5be12930a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:05:17.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-148" for this suite.
Feb  2 15:05:23.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:05:23.678: INFO: namespace projected-148 deletion completed in 6.278993334s

• [SLOW TEST:14.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
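Same downward API plumbing as the cpu-request test earlier, but routed through a projected volume with a per-item mode, which is what [mode on item file] asserts. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -Lc %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400          # the per-item mode under test
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs projected-downward-mode-demo   # expect "400"

------------------------------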
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:05:23.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  2 15:08:26.089: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:26.133: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:28.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:28.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:30.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:30.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:32.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:32.148: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:34.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:34.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:36.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:36.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:38.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:38.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:40.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:40.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:42.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:42.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:44.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:44.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:46.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:46.160: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:48.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:48.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:50.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:50.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:52.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:52.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:54.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:54.139: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:56.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:56.169: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:08:58.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:08:58.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:00.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:00.157: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:02.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:02.147: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:04.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:04.139: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:06.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:06.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:08.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:08.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:10.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:10.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:12.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:12.146: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:14.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:14.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:16.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:16.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:18.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:18.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:20.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:20.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:22.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:22.149: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:24.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:24.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:26.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:26.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:28.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:28.154: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:30.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:30.152: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:32.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:32.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:34.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:34.148: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:36.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:36.148: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:38.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:38.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:40.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:40.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:42.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:42.140: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:44.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:44.139: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:46.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:46.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:48.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:48.144: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:50.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:50.141: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:52.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:52.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:54.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:54.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:56.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:56.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:09:58.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:09:58.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:00.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:00.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:02.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:02.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:04.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:04.143: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:06.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:06.142: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:08.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:08.149: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:10.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:10.151: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:12.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:12.145: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:14.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:14.150: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:16.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:16.146: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  2 15:10:18.134: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  2 15:10:18.149: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:10:18.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7405" for this suite.
Feb  2 15:10:42.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:10:42.330: INFO: namespace container-lifecycle-hook-7405 deletion completed in 24.168574562s

• [SLOW TEST:318.652 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
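A postStart exec hook runs inside the container right after it starts, and the container is not reported Running until the hook returns. A minimal sketch of the shape under test (simplified: the e2e version has the hook call out to the HTTPGet handler pod created above; names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]
EOF

# A hook that blocks delays the Running transition; a failing hook kills
# the container and restarts it per the pod's restartPolicy:
kubectl get pod pod-with-poststart-exec-hook -w

------------------------------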
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:10:42.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-28977092-f891-49ae-bae5-3de30764bfce
STEP: Creating a pod to test consume configMaps
Feb  2 15:10:42.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6" in namespace "configmap-9618" to be "success or failure"
Feb  2 15:10:42.476: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.646235ms
Feb  2 15:10:44.489: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019263393s
Feb  2 15:10:46.502: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03206582s
Feb  2 15:10:48.519: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049448321s
Feb  2 15:10:50.537: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067095703s
STEP: Saw pod success
Feb  2 15:10:50.537: INFO: Pod "pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6" satisfied condition "success or failure"
Feb  2 15:10:50.545: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6 container configmap-volume-test: 
STEP: delete the pod
Feb  2 15:10:50.674: INFO: Waiting for pod pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6 to disappear
Feb  2 15:10:50.681: INFO: Pod pod-configmaps-3de6f756-ddcb-40ef-88ca-9eea007c53c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:10:50.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9618" for this suite.
Feb  2 15:10:56.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:10:56.865: INFO: namespace configmap-9618 deletion completed in 6.177514469s

• [SLOW TEST:14.535 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
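The plain configMap-volume counterpart of the projected-secret defaultMode test earlier: one defaultMode applied to every file in the volume. A minimal sketch (names illustrative):

kubectl create configmap demo-cm-mode --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "stat -Lc %a /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm-mode
      defaultMode: 0400
EOF

kubectl logs configmap-mode-demo   # expect "400"

------------------------------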
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:10:56.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  2 15:10:56.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-5211'
Feb  2 15:10:58.688: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  2 15:10:58.689: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  2 15:11:00.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5211'
Feb  2 15:11:00.991: INFO: stderr: ""
Feb  2 15:11:00.992: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:11:00.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5211" for this suite.
Feb  2 15:11:07.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:11:07.200: INFO: namespace kubectl-5211 deletion completed in 6.198009338s

• [SLOW TEST:10.335 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
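The stderr captured above flags --generator=deployment/apps.v1 as deprecated. A sketch of the command the test runs and its suggested replacement (namespace flags omitted; the deployment name is kept from the log only for readability):

    # Deprecated form exercised by the test (prints the warning seen above):
    kubectl run e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine \
      --generator=deployment/apps.v1

    # Replacement form:
    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine

    # Verify the deployment and its pod, then clean up:
    kubectl get deployment e2e-test-nginx-deployment
    kubectl get pods -l app=e2e-test-nginx-deployment
    kubectl delete deployment e2e-test-nginx-deployment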
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:11:07.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  2 15:11:07.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  2 15:11:07.513: INFO: stderr: ""
Feb  2 15:11:07.513: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:11:07.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2662" for this suite.
Feb  2 15:11:13.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:11:13.689: INFO: namespace kubectl-2662 deletion completed in 6.169520402s

• [SLOW TEST:6.489 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
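The stdout above is littered with \x1b[…m sequences, which is expected: kubectl cluster-info colorizes its output with ANSI escape codes. A sketch of how one might assert on the plain text, assuming GNU sed:

    # Strip ANSI color codes before matching on the cluster-info text:
    kubectl cluster-info | sed 's/\x1b\[[0-9;]*m//g' | grep 'Kubernetes master is running'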
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:11:13.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test the emptydir volume type on the node's default medium
Feb  2 15:11:13.827: INFO: Waiting up to 5m0s for pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb" in namespace "emptydir-4562" to be "success or failure"
Feb  2 15:11:13.833: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299589ms
Feb  2 15:11:15.848: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021527346s
Feb  2 15:11:17.868: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041093704s
Feb  2 15:11:19.878: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05123327s
Feb  2 15:11:21.890: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063299572s
Feb  2 15:11:23.899: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072389456s
STEP: Saw pod success
Feb  2 15:11:23.899: INFO: Pod "pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb" satisfied condition "success or failure"
Feb  2 15:11:23.905: INFO: Trying to get logs from node iruya-node pod pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb container test-container: 
STEP: delete the pod
Feb  2 15:11:23.959: INFO: Waiting for pod pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb to disappear
Feb  2 15:11:23.981: INFO: Pod pod-7b69f75f-2151-4a9f-9de8-478b675fb8fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:11:23.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4562" for this suite.
Feb  2 15:11:30.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:11:30.299: INFO: namespace emptydir-4562 deletion completed in 6.310359996s

• [SLOW TEST:16.610 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
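A minimal sketch of what this spec checks, assuming (as the pass above suggests) that an emptyDir on the default medium is created world-writable; names are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Print the permission bits of the mount point itself.
        command: ["sh", "-c", "stat -c '%a' /mnt/test"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}   # default medium: node disk; medium: Memory would use tmpfs
    EOF
    kubectl logs emptydir-mode-demo   # expect 777 once the pod has succeeded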
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:11:30.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on the node's default medium
Feb  2 15:11:30.415: INFO: Waiting up to 5m0s for pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad" in namespace "emptydir-7496" to be "success or failure"
Feb  2 15:11:30.453: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad": Phase="Pending", Reason="", readiness=false. Elapsed: 37.234567ms
Feb  2 15:11:32.471: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055576207s
Feb  2 15:11:34.479: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063645381s
Feb  2 15:11:36.494: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078075184s
Feb  2 15:11:38.509: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093660604s
STEP: Saw pod success
Feb  2 15:11:38.509: INFO: Pod "pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad" satisfied condition "success or failure"
Feb  2 15:11:38.523: INFO: Trying to get logs from node iruya-node pod pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad container test-container: 
STEP: delete the pod
Feb  2 15:11:38.657: INFO: Waiting for pod pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad to disappear
Feb  2 15:11:38.665: INFO: Pod pod-840eee65-86b6-4a94-89eb-6f0c08ab88ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:11:38.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7496" for this suite.
Feb  2 15:11:44.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:11:44.933: INFO: namespace emptydir-7496 deletion completed in 6.259907556s

• [SLOW TEST:14.634 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
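The (non-root,0777,default) variant adds a securityContext so the probe runs as a non-root UID and verifies the world-writable volume is still usable. A sketch under the same assumptions (illustrative names; the UID is arbitrary):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001        # arbitrary non-root UID
      containers:
      - name: test-container
        image: busybox
        # A non-root user can still create files because the volume dir is 0777.
        command: ["sh", "-c", "stat -c '%a' /mnt/test && touch /mnt/test/written-by-1001"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir: {}
    EOF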
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:11:44.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 15:11:45.060: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  2 15:11:45.077: INFO: Number of nodes with available pods: 0
Feb  2 15:11:45.077: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  2 15:11:45.196: INFO: Number of nodes with available pods: 0
Feb  2 15:11:45.196: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:46.213: INFO: Number of nodes with available pods: 0
Feb  2 15:11:46.213: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:47.203: INFO: Number of nodes with available pods: 0
Feb  2 15:11:47.203: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:48.206: INFO: Number of nodes with available pods: 0
Feb  2 15:11:48.206: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:49.210: INFO: Number of nodes with available pods: 0
Feb  2 15:11:49.210: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:50.213: INFO: Number of nodes with available pods: 0
Feb  2 15:11:50.213: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:51.208: INFO: Number of nodes with available pods: 0
Feb  2 15:11:51.208: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:52.209: INFO: Number of nodes with available pods: 1
Feb  2 15:11:52.209: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  2 15:11:52.280: INFO: Number of nodes with available pods: 1
Feb  2 15:11:52.280: INFO: Number of running nodes: 0, number of available pods: 1
Feb  2 15:11:53.295: INFO: Number of nodes with available pods: 0
Feb  2 15:11:53.295: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  2 15:11:53.323: INFO: Number of nodes with available pods: 0
Feb  2 15:11:53.323: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:54.331: INFO: Number of nodes with available pods: 0
Feb  2 15:11:54.331: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:55.335: INFO: Number of nodes with available pods: 0
Feb  2 15:11:55.335: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:56.331: INFO: Number of nodes with available pods: 0
Feb  2 15:11:56.331: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:57.332: INFO: Number of nodes with available pods: 0
Feb  2 15:11:57.332: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:58.364: INFO: Number of nodes with available pods: 0
Feb  2 15:11:58.364: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:11:59.363: INFO: Number of nodes with available pods: 0
Feb  2 15:11:59.363: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:00.330: INFO: Number of nodes with available pods: 0
Feb  2 15:12:00.330: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:01.331: INFO: Number of nodes with available pods: 0
Feb  2 15:12:01.331: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:02.333: INFO: Number of nodes with available pods: 0
Feb  2 15:12:02.333: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:03.333: INFO: Number of nodes with available pods: 0
Feb  2 15:12:03.333: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:04.333: INFO: Number of nodes with available pods: 0
Feb  2 15:12:04.333: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:05.332: INFO: Number of nodes with available pods: 0
Feb  2 15:12:05.332: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:06.333: INFO: Number of nodes with available pods: 0
Feb  2 15:12:06.333: INFO: Node iruya-node is running more than one daemon pod
Feb  2 15:12:07.339: INFO: Number of nodes with available pods: 1
Feb  2 15:12:07.339: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7414, will wait for the garbage collector to delete the pods
Feb  2 15:12:07.423: INFO: Deleting DaemonSet.extensions daemon-set took: 18.494693ms
Feb  2 15:12:07.723: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.580082ms
Feb  2 15:12:16.637: INFO: Number of nodes with available pods: 0
Feb  2 15:12:16.637: INFO: Number of running nodes: 0, number of available pods: 0
Feb  2 15:12:16.641: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7414/daemonsets","resourceVersion":"22833789"},"items":null}

Feb  2 15:12:16.643: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7414/pods","resourceVersion":"22833789"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:12:16.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7414" for this suite.
Feb  2 15:12:22.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:12:22.905: INFO: namespace daemonsets-7414 deletion completed in 6.167177531s

• [SLOW TEST:37.972 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
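The label dance in this spec — create a DaemonSet with a nodeSelector, label a node to attract the pod, relabel to evict it, then retarget the selector — can be reproduced by hand. A sketch with illustrative names; only the node name is taken from the log:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: ds-demo
    spec:
      selector:
        matchLabels:
          app: ds-demo
      updateStrategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: ds-demo
        spec:
          nodeSelector:
            color: blue
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl label node iruya-node color=blue               # daemon pod gets scheduled
    kubectl label node iruya-node color=green --overwrite  # daemon pod is unscheduled
    # Retarget the selector to green, as the test does:
    kubectl patch daemonset ds-demo --type merge \
      -p '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'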
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:12:22.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  2 15:12:31.032: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d069737d-936b-4847-9812-ba12d1727ac9,GenerateName:,Namespace:events-1919,SelfLink:/api/v1/namespaces/events-1919/pods/send-events-d069737d-936b-4847-9812-ba12d1727ac9,UID:e2e11226-d6d0-4e8c-9926-4d8662d4f359,ResourceVersion:22833837,Generation:0,CreationTimestamp:2020-02-02 15:12:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 965234852,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bd8qx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bd8qx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-bd8qx true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001084c80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001084ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 15:12:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 15:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 15:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-02 15:12:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-02 15:12:23 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-02 15:12:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://5c83d3fb66eac31f28980aeca6b8da9f2bef69995939235d44adc958c738560f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  2 15:12:33.044: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  2 15:12:35.063: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:12:35.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1919" for this suite.
Feb  2 15:13:19.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:13:19.319: INFO: namespace events-1919 deletion completed in 44.164133269s

• [SLOW TEST:56.413 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
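The spec asserts that both a scheduler event and a kubelet event exist for the pod. The same check can be made by hand with a field selector; the pod name and namespace below are copied from the log, and would differ on another run:

    # Scheduled comes from the scheduler; Pulled/Created/Started come from the kubelet.
    kubectl get events -n events-1919 \
      --field-selector involvedObject.name=send-events-d069737d-936b-4847-9812-ba12d1727ac9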
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:13:19.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  2 15:13:19.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:13:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5507" for this suite.
Feb  2 15:14:20.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:14:20.245: INFO: namespace pods-5507 deletion completed in 52.21707904s

• [SLOW TEST:60.926 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
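This spec drives the pod's exec subresource directly over a WebSocket rather than through kubectl. The endpoint is the same one kubectl exec uses; a sketch with an illustrative pod name and command:

    # Ordinary client path (SPDY under the hood):
    kubectl exec pod-demo -- cat /etc/resolv.conf
    # Raw endpoint a WebSocket client would dial (one "command" query parameter
    # per argv element, plus the streams to attach):
    #   wss://<apiserver>/api/v1/namespaces/<ns>/pods/pod-demo/exec?command=cat&command=/etc/resolv.conf&stdout=true&stderr=true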
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  2 15:14:20.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  2 15:14:20.410: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  2 15:14:20.425: INFO: Waiting for terminating namespaces to be deleted...
Feb  2 15:14:20.428: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  2 15:14:20.438: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.438: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 15:14:20.438: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  2 15:14:20.438: INFO: 	Container weave ready: true, restart count 0
Feb  2 15:14:20.438: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 15:14:20.438: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  2 15:14:20.450: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container etcd ready: true, restart count 0
Feb  2 15:14:20.450: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container weave ready: true, restart count 0
Feb  2 15:14:20.450: INFO: 	Container weave-npc ready: true, restart count 0
Feb  2 15:14:20.450: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container coredns ready: true, restart count 0
Feb  2 15:14:20.450: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container kube-controller-manager ready: true, restart count 19
Feb  2 15:14:20.450: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  2 15:14:20.450: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  2 15:14:20.450: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  2 15:14:20.450: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  2 15:14:20.450: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ef9ead16cf4c5b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  2 15:14:21.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9550" for this suite.
Feb  2 15:14:27.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  2 15:14:27.857: INFO: namespace sched-pred-9550 deletion completed in 6.357857039s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.611 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
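The FailedScheduling event above is exactly what an unmatched nodeSelector produces. A sketch that reproduces it; the selector key and value are illustrative and deliberately match no node:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod-demo
    spec:
      nodeSelector:
        label: nonempty        # no node carries this label
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
    EOF
    # The pod stays Pending; the scheduler records the same event seen above:
    kubectl describe pod restricted-pod-demo | grep -A1 FailedScheduling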
SSSSSSSSSSSSSSSSSSSSSS
Feb  2 15:14:27.857: INFO: Running AfterSuite actions on all nodes
Feb  2 15:14:27.857: INFO: Running AfterSuite actions on node 1
Feb  2 15:14:27.857: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8288.602 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS