I0129 10:47:03.744456 8 e2e.go:224] Starting e2e run "af211b51-4284-11ea-8d54-0242ac110005" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1580294822 - Will randomize all specs Will run 201 of 2164 specs Jan 29 10:47:04.214: INFO: >>> kubeConfig: /root/.kube/config Jan 29 10:47:04.217: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jan 29 10:47:04.250: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jan 29 10:47:04.304: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jan 29 10:47:04.304: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 29 10:47:04.304: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 29 10:47:04.318: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 29 10:47:04.318: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) Jan 29 10:47:04.318: INFO: e2e test version: v1.13.12 Jan 29 10:47:04.319: INFO: kube-apiserver version: v1.13.8 SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:47:04.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected Jan 29 10:47:04.576: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 29 10:47:04.740: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-lp7xm" to be "success or failure" Jan 29 10:47:04.747: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.078128ms Jan 29 10:47:07.005: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264987754s Jan 29 10:47:09.014: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273248372s Jan 29 10:47:11.521: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780238587s Jan 29 10:47:13.537: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796546793s Jan 29 10:47:15.551: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.811137448s STEP: Saw pod success Jan 29 10:47:15.552: INFO: Pod "downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:47:15.558: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005 container client-container: STEP: delete the pod Jan 29 10:47:15.832: INFO: Waiting for pod downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005 to disappear Jan 29 10:47:15.862: INFO: Pod downwardapi-volume-b01992a0-4284-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:47:15.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lp7xm" for this suite. Jan 29 10:47:22.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:47:22.137: INFO: namespace: e2e-tests-projected-lp7xm, resource: bindings, ignored listing per whitelist Jan 29 10:47:22.165: INFO: namespace e2e-tests-projected-lp7xm deletion completed in 6.286927983s • [SLOW TEST:17.846 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:47:22.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jan 29 10:47:22.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:24.636: INFO: stderr: "" Jan 29 10:47:24.636: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 29 10:47:24.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:24.796: INFO: stderr: "" Jan 29 10:47:24.796: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " Jan 29 10:47:24.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:25.051: INFO: stderr: "" Jan 29 10:47:25.051: INFO: stdout: "" Jan 29 10:47:25.051: INFO: update-demo-nautilus-gn9ps is created but not running Jan 29 10:47:30.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:30.353: INFO: stderr: "" Jan 29 10:47:30.353: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " Jan 29 10:47:30.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:30.704: INFO: stderr: "" Jan 29 10:47:30.705: INFO: stdout: "" Jan 29 10:47:30.705: INFO: update-demo-nautilus-gn9ps is created but not running Jan 29 10:47:35.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:35.956: INFO: stderr: "" Jan 29 10:47:35.956: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " Jan 29 10:47:35.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:36.098: INFO: stderr: "" Jan 29 10:47:36.098: INFO: stdout: "true" Jan 29 10:47:36.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:36.188: INFO: stderr: "" Jan 29 10:47:36.188: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:47:36.188: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:47:36.225: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:47:36.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:47:36.225: INFO: update-demo-nautilus-gn9ps is verified up and running Jan 29 10:47:36.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgb5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:36.316: INFO: stderr: "" Jan 29 10:47:36.316: INFO: stdout: "" Jan 29 10:47:36.316: INFO: update-demo-nautilus-vgb5v is created but not running Jan 29 10:47:41.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:41.517: INFO: stderr: "" Jan 29 10:47:41.517: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " Jan 29 10:47:41.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:41.675: INFO: stderr: "" Jan 29 10:47:41.675: INFO: stdout: "true" Jan 29 10:47:41.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:41.834: INFO: stderr: "" Jan 29 10:47:41.834: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:47:41.834: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:47:41.843: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:47:41.843: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:47:41.843: INFO: update-demo-nautilus-gn9ps is verified up and running Jan 29 10:47:41.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgb5v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:41.958: INFO: stderr: "" Jan 29 10:47:41.958: INFO: stdout: "true" Jan 29 10:47:41.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgb5v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:42.091: INFO: stderr: "" Jan 29 10:47:42.091: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:47:42.091: INFO: validating pod update-demo-nautilus-vgb5v Jan 29 10:47:42.104: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:47:42.104: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:47:42.104: INFO: update-demo-nautilus-vgb5v is verified up and running STEP: scaling down the replication controller Jan 29 10:47:42.106: INFO: scanned /root for discovery docs: Jan 29 10:47:42.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:43.385: INFO: stderr: "" Jan 29 10:47:43.385: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jan 29 10:47:43.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:43.669: INFO: stderr: "" Jan 29 10:47:43.669: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 29 10:47:48.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:48.861: INFO: stderr: "" Jan 29 10:47:48.861: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-vgb5v " STEP: Replicas for name=update-demo: expected=1 actual=2 Jan 29 10:47:53.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:54.046: INFO: stderr: "" Jan 29 10:47:54.046: INFO: stdout: "update-demo-nautilus-gn9ps " Jan 29 10:47:54.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:54.162: INFO: stderr: "" Jan 29 10:47:54.162: INFO: stdout: "true" Jan 29 10:47:54.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:54.285: INFO: stderr: "" Jan 29 10:47:54.285: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:47:54.285: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:47:54.293: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:47:54.293: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:47:54.293: INFO: update-demo-nautilus-gn9ps is verified up and running STEP: scaling up the replication controller Jan 29 10:47:54.295: INFO: scanned /root for discovery docs: Jan 29 10:47:54.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:55.483: INFO: stderr: "" Jan 29 10:47:55.483: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 29 10:47:55.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:55.624: INFO: stderr: "" Jan 29 10:47:55.624: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-n4zlm " Jan 29 10:47:55.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:55.749: INFO: stderr: "" Jan 29 10:47:55.749: INFO: stdout: "true" Jan 29 10:47:55.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:56.331: INFO: stderr: "" Jan 29 10:47:56.332: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:47:56.332: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:47:56.347: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:47:56.347: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:47:56.347: INFO: update-demo-nautilus-gn9ps is verified up and running Jan 29 10:47:56.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4zlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:47:56.517: INFO: stderr: "" Jan 29 10:47:56.518: INFO: stdout: "" Jan 29 10:47:56.518: INFO: update-demo-nautilus-n4zlm is created but not running Jan 29 10:48:01.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:01.710: INFO: stderr: "" Jan 29 10:48:01.711: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-n4zlm " Jan 29 10:48:01.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:01.918: INFO: stderr: "" Jan 29 10:48:01.918: INFO: stdout: "true" Jan 29 10:48:01.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:02.184: INFO: stderr: "" Jan 29 10:48:02.184: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:48:02.184: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:48:02.216: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:48:02.216: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:48:02.216: INFO: update-demo-nautilus-gn9ps is verified up and running Jan 29 10:48:02.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4zlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:02.360: INFO: stderr: "" Jan 29 10:48:02.360: INFO: stdout: "" Jan 29 10:48:02.360: INFO: update-demo-nautilus-n4zlm is created but not running Jan 29 10:48:07.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:07.576: INFO: stderr: "" Jan 29 10:48:07.576: INFO: stdout: "update-demo-nautilus-gn9ps update-demo-nautilus-n4zlm " Jan 29 10:48:07.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:07.737: INFO: stderr: "" Jan 29 10:48:07.737: INFO: stdout: "true" Jan 29 10:48:07.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn9ps -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:07.853: INFO: stderr: "" Jan 29 10:48:07.853: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:48:07.854: INFO: validating pod update-demo-nautilus-gn9ps Jan 29 10:48:07.867: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:48:07.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:48:07.868: INFO: update-demo-nautilus-gn9ps is verified up and running Jan 29 10:48:07.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4zlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:08.036: INFO: stderr: "" Jan 29 10:48:08.037: INFO: stdout: "true" Jan 29 10:48:08.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4zlm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:08.203: INFO: stderr: "" Jan 29 10:48:08.203: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 29 10:48:08.203: INFO: validating pod update-demo-nautilus-n4zlm Jan 29 10:48:08.227: INFO: got data: { "image": "nautilus.jpg" } Jan 29 10:48:08.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 29 10:48:08.227: INFO: update-demo-nautilus-n4zlm is verified up and running STEP: using delete to clean up resources Jan 29 10:48:08.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:08.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 29 10:48:08.401: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 29 10:48:08.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ksz28' Jan 29 10:48:08.613: INFO: stderr: "No resources found.\n" Jan 29 10:48:08.613: INFO: stdout: "" Jan 29 10:48:08.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ksz28 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 29 10:48:08.772: INFO: stderr: "" Jan 29 10:48:08.772: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:48:08.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ksz28" for this suite. Jan 29 10:48:32.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:48:33.127: INFO: namespace: e2e-tests-kubectl-ksz28, resource: bindings, ignored listing per whitelist Jan 29 10:48:33.156: INFO: namespace e2e-tests-kubectl-ksz28 deletion completed in 24.370166399s • [SLOW TEST:70.990 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:48:33.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 29 10:48:33.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-zcdvf" to be "success or failure" Jan 29 10:48:33.391: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.786388ms Jan 29 10:48:35.407: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037773858s Jan 29 10:48:37.419: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049161179s Jan 29 10:48:39.595: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225874346s Jan 29 10:48:41.935: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565530804s Jan 29 10:48:44.045: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.675211137s STEP: Saw pod success Jan 29 10:48:44.045: INFO: Pod "downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:48:44.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005 container client-container: STEP: delete the pod Jan 29 10:48:44.540: INFO: Waiting for pod downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005 to disappear Jan 29 10:48:44.552: INFO: Pod downwardapi-volume-e4f7ee4d-4284-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:48:44.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zcdvf" for this suite. Jan 29 10:48:50.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:48:50.730: INFO: namespace: e2e-tests-downward-api-zcdvf, resource: bindings, ignored listing per whitelist Jan 29 10:48:50.827: INFO: namespace e2e-tests-downward-api-zcdvf deletion completed in 6.261113848s • [SLOW TEST:17.671 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:48:50.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:49:01.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-2975q" for this suite. 
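The hostAliases spec that produced those /etc/hosts entries is not echoed in the log; the following is a minimal sketch of the kind of pod that exercises the same feature (pod name, IP, and hostnames are illustrative, not the suite's actual values):

# illustrative pod with hostAliases; the kubelet renders these entries into the container's /etc/hosts
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo          # illustrative name, not the pod the suite created
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
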
Jan 29 10:49:43.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 10:49:43.247: INFO: namespace: e2e-tests-kubelet-test-2975q, resource: bindings, ignored listing per whitelist
Jan 29 10:49:43.317: INFO: namespace e2e-tests-kubelet-test-2975q deletion completed in 42.143103334s
• [SLOW TEST:52.491 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 10:49:43.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 29 10:49:54.217: INFO: Successfully updated pod "pod-update-0edefd9f-4285-11ea-8d54-0242ac110005"
STEP: verifying the updated pod is in kubernetes
Jan 29 10:49:54.233: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 10:49:54.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9jlvq" for this suite.
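The in-place update verified above touches only mutable pod fields; a minimal sketch of an equivalent manual update, assuming an illustrative pod name and namespace (the suite patched its own generated pod and label set):

NS=my-namespace    # placeholder; the suite used a generated e2e-tests-pods-* namespace
# labels are mutable on a running pod, so a strategic-merge patch updates them without recreating the pod
kubectl patch pod pod-update-demo --namespace="$NS" -p '{"metadata":{"labels":{"updated":"true"}}}'
# confirm the change was persisted by the API server
kubectl get pod pod-update-demo --namespace="$NS" --show-labels
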
Jan 29 10:50:18.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:50:18.357: INFO: namespace: e2e-tests-pods-9jlvq, resource: bindings, ignored listing per whitelist Jan 29 10:50:18.429: INFO: namespace e2e-tests-pods-9jlvq deletion completed in 24.190946392s • [SLOW TEST:35.111 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:50:18.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 29 10:50:18.791: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845273,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 29 10:50:18.792: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845273,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 29 10:50:28.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845286,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 29 10:50:28.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845286,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 29 10:50:38.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845299,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 29 10:50:38.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845299,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 29 10:50:48.883: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845312,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 29 10:50:48.883: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-a,UID:23d8182f-4285-11ea-a994-fa163e34d433,ResourceVersion:19845312,Generation:0,CreationTimestamp:2020-01-29 10:50:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 29 10:50:58.938: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-b,UID:3bc463f6-4285-11ea-a994-fa163e34d433,ResourceVersion:19845325,Generation:0,CreationTimestamp:2020-01-29 10:50:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 29 10:50:58.938: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-b,UID:3bc463f6-4285-11ea-a994-fa163e34d433,ResourceVersion:19845325,Generation:0,CreationTimestamp:2020-01-29 10:50:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 29 10:51:08.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-b,UID:3bc463f6-4285-11ea-a994-fa163e34d433,ResourceVersion:19845337,Generation:0,CreationTimestamp:2020-01-29 10:50:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 29 10:51:08.965: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-gwsxd,SelfLink:/api/v1/namespaces/e2e-tests-watch-gwsxd/configmaps/e2e-watch-test-configmap-b,UID:3bc463f6-4285-11ea-a994-fa163e34d433,ResourceVersion:19845337,Generation:0,CreationTimestamp:2020-01-29 10:50:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:51:18.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-gwsxd" for this suite. Jan 29 10:51:25.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:51:25.305: INFO: namespace: e2e-tests-watch-gwsxd, resource: bindings, ignored listing per whitelist Jan 29 10:51:25.316: INFO: namespace e2e-tests-watch-gwsxd deletion completed in 6.30036139s • [SLOW TEST:66.887 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:51:25.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-vds8f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vds8f to expose endpoints map[] Jan 29 10:51:25.580: INFO: Get endpoints failed (19.787502ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 29 10:51:26.616: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vds8f exposes endpoints map[] (1.056063792s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-vds8f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vds8f to expose endpoints map[pod1:[100]] Jan 29 10:51:30.779: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.114324238s elapsed, will retry) Jan 29 10:51:33.986: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vds8f exposes endpoints map[pod1:[100]] (7.321978849s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-vds8f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vds8f to expose endpoints map[pod1:[100] pod2:[101]] Jan 29 10:51:39.374: INFO: Unexpected endpoints: found map[4c4c0513-4285-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.364238995s elapsed, will retry) Jan 29 10:51:43.521: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vds8f exposes endpoints map[pod1:[100] pod2:[101]] (9.511383266s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-vds8f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace 
e2e-tests-services-vds8f to expose endpoints map[pod2:[101]] Jan 29 10:51:45.156: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vds8f exposes endpoints map[pod2:[101]] (1.605670849s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-vds8f STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-vds8f to expose endpoints map[] Jan 29 10:51:46.429: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-vds8f exposes endpoints map[] (1.263236592s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:51:46.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-vds8f" for this suite. Jan 29 10:52:08.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:52:08.936: INFO: namespace: e2e-tests-services-vds8f, resource: bindings, ignored listing per whitelist Jan 29 10:52:09.151: INFO: namespace e2e-tests-services-vds8f deletion completed in 22.301506187s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:43.834 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:52:09.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Jan 29 10:52:09.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-qktn5 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 29 10:52:19.072: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0129 10:52:17.847228 834 log.go:172] (0xc0003e0580) (0xc00066d360) Create stream\nI0129 10:52:17.847315 834 log.go:172] (0xc0003e0580) (0xc00066d360) Stream added, broadcasting: 1\nI0129 10:52:17.856163 834 log.go:172] (0xc0003e0580) Reply frame received for 1\nI0129 10:52:17.856280 834 log.go:172] (0xc0003e0580) (0xc00066d400) Create stream\nI0129 10:52:17.856311 834 log.go:172] (0xc0003e0580) (0xc00066d400) Stream added, broadcasting: 3\nI0129 10:52:17.862633 834 log.go:172] (0xc0003e0580) Reply frame received for 3\nI0129 10:52:17.862907 834 log.go:172] (0xc0003e0580) (0xc0008cc000) Create stream\nI0129 10:52:17.862958 834 log.go:172] (0xc0003e0580) (0xc0008cc000) Stream added, broadcasting: 5\nI0129 10:52:17.865679 834 log.go:172] (0xc0003e0580) Reply frame received for 5\nI0129 10:52:17.865767 834 log.go:172] (0xc0003e0580) (0xc0008ac460) Create stream\nI0129 10:52:17.865789 834 log.go:172] (0xc0003e0580) (0xc0008ac460) Stream added, broadcasting: 7\nI0129 10:52:17.867510 834 log.go:172] (0xc0003e0580) Reply frame received for 7\nI0129 10:52:17.868332 834 log.go:172] (0xc00066d400) (3) Writing data frame\nI0129 10:52:17.868727 834 log.go:172] (0xc00066d400) (3) Writing data frame\nI0129 10:52:17.876146 834 log.go:172] (0xc0003e0580) Data frame received for 5\nI0129 10:52:17.876192 834 log.go:172] (0xc0008cc000) (5) Data frame handling\nI0129 10:52:17.876300 834 log.go:172] (0xc0008cc000) (5) Data frame sent\nI0129 10:52:17.883318 834 log.go:172] (0xc0003e0580) Data frame received for 5\nI0129 10:52:17.883385 834 log.go:172] (0xc0008cc000) (5) Data frame handling\nI0129 10:52:17.883445 834 log.go:172] (0xc0008cc000) (5) Data frame sent\nI0129 10:52:19.024303 834 log.go:172] (0xc0003e0580) Data frame received for 1\nI0129 10:52:19.024395 834 log.go:172] (0xc0003e0580) (0xc00066d400) Stream removed, broadcasting: 3\nI0129 10:52:19.024474 834 log.go:172] (0xc00066d360) (1) Data frame handling\nI0129 10:52:19.024523 834 log.go:172] (0xc00066d360) (1) Data frame sent\nI0129 10:52:19.024569 834 log.go:172] (0xc0003e0580) (0xc0008cc000) Stream removed, broadcasting: 5\nI0129 10:52:19.024635 834 log.go:172] (0xc0003e0580) (0xc00066d360) Stream removed, broadcasting: 1\nI0129 10:52:19.024899 834 log.go:172] (0xc0003e0580) (0xc0008ac460) Stream removed, broadcasting: 7\nI0129 10:52:19.025217 834 log.go:172] (0xc0003e0580) Go away received\nI0129 10:52:19.025430 834 log.go:172] (0xc0003e0580) (0xc00066d360) Stream removed, broadcasting: 1\nI0129 10:52:19.025469 834 log.go:172] (0xc0003e0580) (0xc00066d400) Stream removed, broadcasting: 3\nI0129 10:52:19.025492 834 log.go:172] (0xc0003e0580) (0xc0008cc000) Stream removed, broadcasting: 5\nI0129 10:52:19.025506 834 log.go:172] (0xc0003e0580) (0xc0008ac460) Stream removed, broadcasting: 7\n" Jan 29 10:52:19.072: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:52:21.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qktn5" for this suite. 
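The deprecation warning above points at the replacement for --generator=job/v1; a rough equivalent under that advice, assuming a namespace of your own (kubectl create job is available on newer kubectl releases and creates the Job without attaching stdin the way the test's run --rm invocation does):

NS=my-namespace    # placeholder; the suite used a generated e2e-tests-kubectl-* namespace
# non-deprecated form suggested by the warning: create the Job explicitly instead of run --generator=job/v1
kubectl create job e2e-test-busybox-job --namespace="$NS" \
  --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
# clean up afterwards, mirroring what --rm=true did for the run-based invocation
kubectl delete job e2e-test-busybox-job --namespace="$NS"
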
Jan 29 10:52:27.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:52:27.787: INFO: namespace: e2e-tests-kubectl-qktn5, resource: bindings, ignored listing per whitelist Jan 29 10:52:27.839: INFO: namespace e2e-tests-kubectl-qktn5 deletion completed in 6.744260761s • [SLOW TEST:18.688 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:52:27.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-70f124e2-4285-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 10:52:28.204: INFO: Waiting up to 5m0s for pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-4w2wq" to be "success or failure" Jan 29 10:52:28.219: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.385386ms Jan 29 10:52:30.246: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042107609s Jan 29 10:52:32.267: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062953709s Jan 29 10:52:34.429: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225030849s Jan 29 10:52:36.470: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.266313875s Jan 29 10:52:38.507: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.303559737s STEP: Saw pod success Jan 29 10:52:38.508: INFO: Pod "pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:52:38.535: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 29 10:52:39.399: INFO: Waiting for pod pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005 to disappear Jan 29 10:52:39.773: INFO: Pod pod-configmaps-70f214ec-4285-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:52:39.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4w2wq" for this suite. Jan 29 10:52:47.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:52:48.029: INFO: namespace: e2e-tests-configmap-4w2wq, resource: bindings, ignored listing per whitelist Jan 29 10:52:48.227: INFO: namespace e2e-tests-configmap-4w2wq deletion completed in 8.445046298s • [SLOW TEST:20.388 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:52:48.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-7d07eefa-4285-11ea-8d54-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-7d07eed2-4285-11ea-8d54-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 29 10:52:48.575: INFO: Waiting up to 5m0s for pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-wkj66" to be "success or failure" Jan 29 10:52:48.595: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.0006ms Jan 29 10:52:50.621: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0448369s Jan 29 10:52:52.645: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069242753s Jan 29 10:52:54.704: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.127873113s Jan 29 10:52:56.951: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.375699371s Jan 29 10:52:58.967: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.391269085s STEP: Saw pod success Jan 29 10:52:58.967: INFO: Pod "projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:52:58.972: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005 container projected-all-volume-test: STEP: delete the pod Jan 29 10:52:59.111: INFO: Waiting for pod projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005 to disappear Jan 29 10:52:59.150: INFO: Pod projected-volume-7d07eda5-4285-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:52:59.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wkj66" for this suite. Jan 29 10:53:05.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:53:05.220: INFO: namespace: e2e-tests-projected-wkj66, resource: bindings, ignored listing per whitelist Jan 29 10:53:05.416: INFO: namespace e2e-tests-projected-wkj66 deletion completed in 6.256161042s • [SLOW TEST:17.188 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:53:05.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 29 10:53:16.120: INFO: Waiting up to 5m0s for pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005" in namespace "e2e-tests-pods-84l5b" to be "success or failure" Jan 29 10:53:16.179: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 58.475877ms Jan 29 10:53:18.194: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074329529s Jan 29 10:53:20.210: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.089957993s Jan 29 10:53:22.226: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105676173s Jan 29 10:53:24.258: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.138410173s Jan 29 10:53:26.477: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.357422408s STEP: Saw pod success Jan 29 10:53:26.478: INFO: Pod "client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:53:26.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005 container env3cont: STEP: delete the pod Jan 29 10:53:26.734: INFO: Waiting for pod client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005 to disappear Jan 29 10:53:26.749: INFO: Pod client-envvars-8d8a0a89-4285-11ea-8d54-0242ac110005 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:53:26.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-84l5b" for this suite. Jan 29 10:54:16.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:54:16.968: INFO: namespace: e2e-tests-pods-84l5b, resource: bindings, ignored listing per whitelist Jan 29 10:54:17.002: INFO: namespace e2e-tests-pods-84l5b deletion completed in 50.235153623s • [SLOW TEST:71.585 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:54:17.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
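For reference, a minimal sketch of the pod shape this lifecycle-hook spec drives: a container with a postStart exec hook, built with the core/v1 Go types the e2e framework itself uses. Everything here is an illustrative placeholder (the real spec's hook calls back to the HTTPGet handler pod created above; a trivial echo stands in), and corev1.Handler is the v1.13-era type name, renamed LifecycleHandler in much later releases.

// Sketch only: a pod whose container runs a postStart exec hook.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container immediately after it starts;
					// in the real spec this command contacts the handler pod.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}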
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 29 10:57:21.857: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:22.000: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:24.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:24.026: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:26.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:26.015: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:28.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:28.017: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:30.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:30.035: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:32.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:32.022: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:34.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:34.018: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:36.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:36.019: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:38.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:38.018: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:40.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:40.061: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:42.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:42.107: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:44.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:44.066: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:46.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:46.025: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:48.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:48.093: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:50.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:50.021: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:52.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:52.018: INFO: Pod pod-with-poststart-exec-hook still exists Jan 29 10:57:54.001: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 29 10:57:54.021: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:57:54.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4fhjs" for this suite. 
Jan 29 10:58:18.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:58:18.245: INFO: namespace: e2e-tests-container-lifecycle-hook-4fhjs, resource: bindings, ignored listing per whitelist Jan 29 10:58:18.257: INFO: namespace e2e-tests-container-lifecycle-hook-4fhjs deletion completed in 24.226839764s • [SLOW TEST:241.254 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:58:18.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-41c7255c-4286-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume secrets Jan 29 10:58:18.576: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-sp5f2" to be "success or failure" Jan 29 10:58:18.586: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.846526ms Jan 29 10:58:20.674: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097869777s Jan 29 10:58:22.697: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120685334s Jan 29 10:58:24.719: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143176714s Jan 29 10:58:26.729: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.152602624s Jan 29 10:58:28.779: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.2029603s Jan 29 10:58:30.806: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.230159038s STEP: Saw pod success Jan 29 10:58:30.807: INFO: Pod "pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:58:30.820: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 29 10:58:30.937: INFO: Waiting for pod pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005 to disappear Jan 29 10:58:30.993: INFO: Pod pod-projected-secrets-41c9e9b6-4286-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:58:30.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sp5f2" for this suite. Jan 29 10:58:37.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:58:37.137: INFO: namespace: e2e-tests-projected-sp5f2, resource: bindings, ignored listing per whitelist Jan 29 10:58:37.245: INFO: namespace e2e-tests-projected-sp5f2 deletion completed in 6.240477785s • [SLOW TEST:18.989 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:58:37.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 29 10:58:37.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-wqk9c" to be "success or failure" Jan 29 10:58:37.552: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886477ms Jan 29 10:58:40.406: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.860220093s Jan 29 10:58:42.432: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.886344752s Jan 29 10:58:44.451: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.905375985s Jan 29 10:58:46.502: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.956879558s STEP: Saw pod success Jan 29 10:58:46.503: INFO: Pod "downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:58:46.535: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005 container client-container: STEP: delete the pod Jan 29 10:58:46.749: INFO: Waiting for pod downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005 to disappear Jan 29 10:58:46.764: INFO: Pod downwardapi-volume-4d18793c-4286-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:58:46.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wqk9c" for this suite. Jan 29 10:58:52.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:58:52.989: INFO: namespace: e2e-tests-downward-api-wqk9c, resource: bindings, ignored listing per whitelist Jan 29 10:58:53.195: INFO: namespace e2e-tests-downward-api-wqk9c deletion completed in 6.410911359s • [SLOW TEST:15.949 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:58:53.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-fnfqf/configmap-test-56945946-4286-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 10:58:53.409: INFO: Waiting up to 5m0s for pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-fnfqf" to be "success or failure" Jan 29 10:58:53.529: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 120.050589ms Jan 29 10:58:55.547: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13789393s Jan 29 10:58:57.565: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155817739s Jan 29 10:59:00.162: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.752959089s Jan 29 10:59:02.209: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.799847484s Jan 29 10:59:04.232: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.823044536s STEP: Saw pod success Jan 29 10:59:04.232: INFO: Pod "pod-configmaps-56955843-4286-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 10:59:04.242: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-56955843-4286-11ea-8d54-0242ac110005 container env-test: STEP: delete the pod Jan 29 10:59:04.377: INFO: Waiting for pod pod-configmaps-56955843-4286-11ea-8d54-0242ac110005 to disappear Jan 29 10:59:04.395: INFO: Pod pod-configmaps-56955843-4286-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:59:04.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-fnfqf" for this suite. Jan 29 10:59:10.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 10:59:10.622: INFO: namespace: e2e-tests-configmap-fnfqf, resource: bindings, ignored listing per whitelist Jan 29 10:59:10.684: INFO: namespace e2e-tests-configmap-fnfqf deletion completed in 6.27232413s • [SLOW TEST:17.488 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 10:59:10.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0129 10:59:52.560689 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
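For reference, a sketch of the operation this garbage-collector spec exercises: deleting a replication controller with an orphaning delete option so its pods are left running for the 30-second check above. The RC name and namespace are placeholders, and the Delete signature shown is the pre-1.18 client-go form that matches the v1.13 cluster in this run.

// Sketch only: delete an RC while orphaning its pods.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// PropagationPolicy=Orphan asks the garbage collector to strip the pods'
	// owner references instead of cascading the delete to them.
	orphan := metav1.DeletePropagationOrphan
	err = client.CoreV1().ReplicationControllers("default").Delete(
		"example-rc", &metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}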
Jan 29 10:59:52.561: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 10:59:52.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9hvbv" for this suite. Jan 29 11:00:03.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:00:03.654: INFO: namespace: e2e-tests-gc-9hvbv, resource: bindings, ignored listing per whitelist Jan 29 11:00:03.963: INFO: namespace e2e-tests-gc-9hvbv deletion completed in 11.393899755s • [SLOW TEST:53.279 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:00:03.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 29 11:00:04.892: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-dnv9q" to be "success or failure" Jan 29 11:00:05.168: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 275.360768ms Jan 29 11:00:08.267: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.374048425s Jan 29 11:00:10.445: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.552894094s Jan 29 11:00:12.462: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.570013348s Jan 29 11:00:14.487: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.594905728s Jan 29 11:00:16.507: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.614732721s Jan 29 11:00:18.548: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.655338737s Jan 29 11:00:21.044: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.15145897s Jan 29 11:00:23.059: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.167022345s Jan 29 11:00:25.089: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.196393188s STEP: Saw pod success Jan 29 11:00:25.089: INFO: Pod "downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:00:25.099: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005 container client-container: STEP: delete the pod Jan 29 11:00:25.149: INFO: Waiting for pod downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005 to disappear Jan 29 11:00:25.165: INFO: Pod downwardapi-volume-81264e38-4286-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:00:25.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dnv9q" for this suite. 
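For reference, a sketch of the kind of pod this downward API spec builds: a downward API volume whose single item carries an explicit per-item file mode. The 0400 mode, paths, and image are illustrative placeholders, not the test's generated values.

// Sketch only: downward API volume item with an explicit mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item mode; overrides the volume's defaultMode for this file

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							Mode: &mode,
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}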
Jan 29 11:00:31.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:00:31.338: INFO: namespace: e2e-tests-downward-api-dnv9q, resource: bindings, ignored listing per whitelist Jan 29 11:00:31.472: INFO: namespace e2e-tests-downward-api-dnv9q deletion completed in 6.254789064s • [SLOW TEST:27.509 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:00:31.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 29 11:00:32.044: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"91381847-4286-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001ad3bda), BlockOwnerDeletion:(*bool)(0xc001ad3bdb)}} Jan 29 11:00:32.259: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"912e17ec-4286-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c314da), BlockOwnerDeletion:(*bool)(0xc001c314db)}} Jan 29 11:00:32.297: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9130cf92-4286-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c316a2), BlockOwnerDeletion:(*bool)(0xc001c316a3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:00:37.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8clb2" for this suite. 
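For reference, the owner references logged above have the shape below: pod1 is owned by pod3, pod3 by pod2, and pod2 by pod1, forming the dependency circle the garbage collector must still clean up. The UID is a placeholder here; a real reference must carry the owner object's actual UID.

// Sketch only: one link of the pod1 <- pod3 <- pod2 <- pod1 ownership circle.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	controller := true
	block := true

	// Same shape as pod1.ObjectMeta.OwnerReferences in the log: pod1 names
	// pod3 as its controlling owner, with BlockOwnerDeletion set.
	ref := metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               "pod3",
		UID:                types.UID("00000000-0000-0000-0000-000000000000"),
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
	out, _ := json.MarshalIndent(ref, "", "  ")
	fmt.Println(string(out))
}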
Jan 29 11:00:43.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:00:43.777: INFO: namespace: e2e-tests-gc-8clb2, resource: bindings, ignored listing per whitelist Jan 29 11:00:44.028: INFO: namespace e2e-tests-gc-8clb2 deletion completed in 6.55110801s • [SLOW TEST:12.555 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:00:44.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7sj5x [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Jan 29 11:00:44.417: INFO: Found 0 stateful pods, waiting for 3 Jan 29 11:00:54.440: INFO: Found 2 stateful pods, waiting for 3 Jan 29 11:01:04.433: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:01:04.433: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:01:04.433: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 29 11:01:14.432: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:01:14.432: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:01:14.432: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:01:14.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7sj5x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 29 11:01:15.386: INFO: stderr: "I0129 11:01:14.670682 862 log.go:172] (0xc0006e4370) (0xc00072a640) Create stream\nI0129 11:01:14.670876 862 log.go:172] (0xc0006e4370) (0xc00072a640) Stream added, broadcasting: 1\nI0129 11:01:14.676670 862 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0129 11:01:14.676702 862 log.go:172] (0xc0006e4370) (0xc000654dc0) Create stream\nI0129 11:01:14.676732 862 log.go:172] (0xc0006e4370) (0xc000654dc0) Stream added, broadcasting: 3\nI0129 11:01:14.678036 862 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0129 11:01:14.678084 862 log.go:172] 
(0xc0006e4370) (0xc00068c000) Create stream\nI0129 11:01:14.678115 862 log.go:172] (0xc0006e4370) (0xc00068c000) Stream added, broadcasting: 5\nI0129 11:01:14.679375 862 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0129 11:01:15.085891 862 log.go:172] (0xc0006e4370) Data frame received for 3\nI0129 11:01:15.086157 862 log.go:172] (0xc000654dc0) (3) Data frame handling\nI0129 11:01:15.086248 862 log.go:172] (0xc000654dc0) (3) Data frame sent\nI0129 11:01:15.375406 862 log.go:172] (0xc0006e4370) Data frame received for 1\nI0129 11:01:15.375775 862 log.go:172] (0xc0006e4370) (0xc00068c000) Stream removed, broadcasting: 5\nI0129 11:01:15.375952 862 log.go:172] (0xc00072a640) (1) Data frame handling\nI0129 11:01:15.376099 862 log.go:172] (0xc00072a640) (1) Data frame sent\nI0129 11:01:15.376124 862 log.go:172] (0xc0006e4370) (0xc000654dc0) Stream removed, broadcasting: 3\nI0129 11:01:15.376190 862 log.go:172] (0xc0006e4370) (0xc00072a640) Stream removed, broadcasting: 1\nI0129 11:01:15.376280 862 log.go:172] (0xc0006e4370) Go away received\nI0129 11:01:15.377024 862 log.go:172] (0xc0006e4370) (0xc00072a640) Stream removed, broadcasting: 1\nI0129 11:01:15.377133 862 log.go:172] (0xc0006e4370) (0xc000654dc0) Stream removed, broadcasting: 3\nI0129 11:01:15.377208 862 log.go:172] (0xc0006e4370) (0xc00068c000) Stream removed, broadcasting: 5\n" Jan 29 11:01:15.386: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 29 11:01:15.386: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 29 11:01:25.514: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jan 29 11:01:35.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7sj5x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 11:01:36.239: INFO: stderr: "I0129 11:01:35.874435 884 log.go:172] (0xc00078e2c0) (0xc0006d8640) Create stream\nI0129 11:01:35.875443 884 log.go:172] (0xc00078e2c0) (0xc0006d8640) Stream added, broadcasting: 1\nI0129 11:01:35.892853 884 log.go:172] (0xc00078e2c0) Reply frame received for 1\nI0129 11:01:35.893122 884 log.go:172] (0xc00078e2c0) (0xc0007bad20) Create stream\nI0129 11:01:35.893217 884 log.go:172] (0xc00078e2c0) (0xc0007bad20) Stream added, broadcasting: 3\nI0129 11:01:35.896329 884 log.go:172] (0xc00078e2c0) Reply frame received for 3\nI0129 11:01:35.896448 884 log.go:172] (0xc00078e2c0) (0xc0000ee000) Create stream\nI0129 11:01:35.896482 884 log.go:172] (0xc00078e2c0) (0xc0000ee000) Stream added, broadcasting: 5\nI0129 11:01:35.898885 884 log.go:172] (0xc00078e2c0) Reply frame received for 5\nI0129 11:01:36.038398 884 log.go:172] (0xc00078e2c0) Data frame received for 3\nI0129 11:01:36.039286 884 log.go:172] (0xc0007bad20) (3) Data frame handling\nI0129 11:01:36.039377 884 log.go:172] (0xc0007bad20) (3) Data frame sent\nI0129 11:01:36.224360 884 log.go:172] (0xc00078e2c0) (0xc0007bad20) Stream removed, broadcasting: 3\nI0129 11:01:36.224979 884 log.go:172] (0xc00078e2c0) (0xc0000ee000) Stream removed, broadcasting: 5\nI0129 11:01:36.225138 884 log.go:172] (0xc00078e2c0) Data frame received for 1\nI0129 11:01:36.225241 884 log.go:172] (0xc0006d8640) (1) Data frame handling\nI0129 11:01:36.225304 884 log.go:172] 
(0xc0006d8640) (1) Data frame sent\nI0129 11:01:36.225342 884 log.go:172] (0xc00078e2c0) (0xc0006d8640) Stream removed, broadcasting: 1\nI0129 11:01:36.225726 884 log.go:172] (0xc00078e2c0) Go away received\nI0129 11:01:36.226417 884 log.go:172] (0xc00078e2c0) (0xc0006d8640) Stream removed, broadcasting: 1\nI0129 11:01:36.226446 884 log.go:172] (0xc00078e2c0) (0xc0007bad20) Stream removed, broadcasting: 3\nI0129 11:01:36.226455 884 log.go:172] (0xc00078e2c0) (0xc0000ee000) Stream removed, broadcasting: 5\n" Jan 29 11:01:36.239: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 29 11:01:36.239: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 29 11:01:46.308: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:01:46.308: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:01:46.308: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:01:46.308: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:01:56.342: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:01:56.342: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:01:56.342: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:02:06.604: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:02:06.604: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:02:06.604: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:02:16.418: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:02:16.419: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:02:26.336: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:02:26.336: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:02:36.363: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update STEP: Rolling back to a previous revision Jan 29 11:02:46.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7sj5x ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 29 11:02:47.156: INFO: stderr: "I0129 11:02:46.631924 906 log.go:172] (0xc00073c370) (0xc00075c640) Create stream\nI0129 11:02:46.632089 906 log.go:172] (0xc00073c370) (0xc00075c640) Stream added, broadcasting: 1\nI0129 11:02:46.638869 906 log.go:172] (0xc00073c370) Reply frame received for 1\nI0129 11:02:46.638917 906 log.go:172] (0xc00073c370) (0xc00064cbe0) Create stream\nI0129 11:02:46.638925 906 log.go:172] (0xc00073c370) (0xc00064cbe0) Stream added, broadcasting: 3\nI0129 11:02:46.639963 906 log.go:172] (0xc00073c370) Reply frame received for 3\nI0129 11:02:46.640021 906 log.go:172] (0xc00073c370) (0xc0006e8000) Create stream\nI0129 11:02:46.640031 
906 log.go:172] (0xc00073c370) (0xc0006e8000) Stream added, broadcasting: 5\nI0129 11:02:46.641303 906 log.go:172] (0xc00073c370) Reply frame received for 5\nI0129 11:02:46.992078 906 log.go:172] (0xc00073c370) Data frame received for 3\nI0129 11:02:46.992123 906 log.go:172] (0xc00064cbe0) (3) Data frame handling\nI0129 11:02:46.992140 906 log.go:172] (0xc00064cbe0) (3) Data frame sent\nI0129 11:02:47.143704 906 log.go:172] (0xc00073c370) (0xc00064cbe0) Stream removed, broadcasting: 3\nI0129 11:02:47.143926 906 log.go:172] (0xc00073c370) Data frame received for 1\nI0129 11:02:47.143949 906 log.go:172] (0xc00075c640) (1) Data frame handling\nI0129 11:02:47.143972 906 log.go:172] (0xc00075c640) (1) Data frame sent\nI0129 11:02:47.144037 906 log.go:172] (0xc00073c370) (0xc00075c640) Stream removed, broadcasting: 1\nI0129 11:02:47.144160 906 log.go:172] (0xc00073c370) (0xc0006e8000) Stream removed, broadcasting: 5\nI0129 11:02:47.144221 906 log.go:172] (0xc00073c370) Go away received\nI0129 11:02:47.144617 906 log.go:172] (0xc00073c370) (0xc00075c640) Stream removed, broadcasting: 1\nI0129 11:02:47.144638 906 log.go:172] (0xc00073c370) (0xc00064cbe0) Stream removed, broadcasting: 3\nI0129 11:02:47.144647 906 log.go:172] (0xc00073c370) (0xc0006e8000) Stream removed, broadcasting: 5\n" Jan 29 11:02:47.157: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 29 11:02:47.157: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 29 11:02:57.227: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jan 29 11:03:07.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7sj5x ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 29 11:03:07.894: INFO: stderr: "I0129 11:03:07.541010 928 log.go:172] (0xc000736370) (0xc000760640) Create stream\nI0129 11:03:07.541282 928 log.go:172] (0xc000736370) (0xc000760640) Stream added, broadcasting: 1\nI0129 11:03:07.549197 928 log.go:172] (0xc000736370) Reply frame received for 1\nI0129 11:03:07.549238 928 log.go:172] (0xc000736370) (0xc0005e2d20) Create stream\nI0129 11:03:07.549251 928 log.go:172] (0xc000736370) (0xc0005e2d20) Stream added, broadcasting: 3\nI0129 11:03:07.551535 928 log.go:172] (0xc000736370) Reply frame received for 3\nI0129 11:03:07.551576 928 log.go:172] (0xc000736370) (0xc00066e000) Create stream\nI0129 11:03:07.551593 928 log.go:172] (0xc000736370) (0xc00066e000) Stream added, broadcasting: 5\nI0129 11:03:07.553443 928 log.go:172] (0xc000736370) Reply frame received for 5\nI0129 11:03:07.748850 928 log.go:172] (0xc000736370) Data frame received for 3\nI0129 11:03:07.748920 928 log.go:172] (0xc0005e2d20) (3) Data frame handling\nI0129 11:03:07.748931 928 log.go:172] (0xc0005e2d20) (3) Data frame sent\nI0129 11:03:07.883620 928 log.go:172] (0xc000736370) Data frame received for 1\nI0129 11:03:07.883731 928 log.go:172] (0xc000736370) (0xc00066e000) Stream removed, broadcasting: 5\nI0129 11:03:07.883767 928 log.go:172] (0xc000760640) (1) Data frame handling\nI0129 11:03:07.883785 928 log.go:172] (0xc000760640) (1) Data frame sent\nI0129 11:03:07.883829 928 log.go:172] (0xc000736370) (0xc0005e2d20) Stream removed, broadcasting: 3\nI0129 11:03:07.883870 928 log.go:172] (0xc000736370) (0xc000760640) Stream removed, broadcasting: 1\nI0129 11:03:07.883912 928 log.go:172] (0xc000736370) Go away received\nI0129 11:03:07.884957 928 
log.go:172] (0xc000736370) (0xc000760640) Stream removed, broadcasting: 1\nI0129 11:03:07.885044 928 log.go:172] (0xc000736370) (0xc0005e2d20) Stream removed, broadcasting: 3\nI0129 11:03:07.885064 928 log.go:172] (0xc000736370) (0xc00066e000) Stream removed, broadcasting: 5\n" Jan 29 11:03:07.894: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 29 11:03:07.894: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 29 11:03:17.960: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:03:17.960: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:17.960: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:27.992: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:03:27.992: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:27.992: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:38.034: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:03:38.034: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:47.988: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update Jan 29 11:03:47.988: INFO: Waiting for Pod e2e-tests-statefulset-7sj5x/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jan 29 11:03:58.645: INFO: Waiting for StatefulSet e2e-tests-statefulset-7sj5x/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 29 11:04:07.980: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7sj5x Jan 29 11:04:07.985: INFO: Scaling statefulset ss2 to 0 Jan 29 11:04:48.056: INFO: Waiting for statefulset status.replicas updated to 0 Jan 29 11:04:48.065: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:04:48.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7sj5x" for this suite. 
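For reference, a sketch of a StatefulSet shaped like the "ss2" set above: three replicas of nginx behind the headless service "test" that the suite creates, with the RollingUpdate strategy the spec drives by patching the template image from nginx:1.14-alpine to nginx:1.15-alpine and then rolling back. Labels and the set name are placeholders.

// Sketch only: a rolling-update StatefulSet like the one exercised above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss2"}

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless service created in the namespace first
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				// Each template change creates a new controller revision;
				// pods are replaced in reverse ordinal order.
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}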
Jan 29 11:04:56.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:04:56.226: INFO: namespace: e2e-tests-statefulset-7sj5x, resource: bindings, ignored listing per whitelist Jan 29 11:04:56.325: INFO: namespace e2e-tests-statefulset-7sj5x deletion completed in 8.204750942s • [SLOW TEST:252.297 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:04:56.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-95h8 STEP: Creating a pod to test atomic-volume-subpath Jan 29 11:04:56.746: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-95h8" in namespace "e2e-tests-subpath-68cbc" to be "success or failure" Jan 29 11:04:56.760: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.108472ms Jan 29 11:04:58.779: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032897209s Jan 29 11:05:00.812: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066028415s Jan 29 11:05:02.834: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087586231s Jan 29 11:05:05.094: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.347374065s Jan 29 11:05:07.116: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.370084462s Jan 29 11:05:09.139: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.392637816s Jan 29 11:05:11.152: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.405529208s Jan 29 11:05:13.165: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 16.418802168s Jan 29 11:05:15.185: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 18.438937474s Jan 29 11:05:17.203: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.45634852s Jan 29 11:05:19.226: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 22.479412262s Jan 29 11:05:21.253: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 24.506858212s Jan 29 11:05:23.276: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 26.530063969s Jan 29 11:05:25.290: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 28.543515225s Jan 29 11:05:27.304: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 30.557983181s Jan 29 11:05:29.326: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 32.579495131s Jan 29 11:05:31.341: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Running", Reason="", readiness=false. Elapsed: 34.595238439s Jan 29 11:05:33.362: INFO: Pod "pod-subpath-test-configmap-95h8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.615534026s STEP: Saw pod success Jan 29 11:05:33.362: INFO: Pod "pod-subpath-test-configmap-95h8" satisfied condition "success or failure" Jan 29 11:05:33.371: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-95h8 container test-container-subpath-configmap-95h8: STEP: delete the pod Jan 29 11:05:33.520: INFO: Waiting for pod pod-subpath-test-configmap-95h8 to disappear Jan 29 11:05:33.536: INFO: Pod pod-subpath-test-configmap-95h8 no longer exists STEP: Deleting pod pod-subpath-test-configmap-95h8 Jan 29 11:05:33.536: INFO: Deleting pod "pod-subpath-test-configmap-95h8" in namespace "e2e-tests-subpath-68cbc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:05:33.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-68cbc" for this suite. 
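For reference, a sketch of the mount shape this subpath spec exercises: a ConfigMap volume mounted into the container with subPath, so only a single key's file appears at the mount point. The ConfigMap name, key, and image are placeholders, not the test's generated fixtures.

// Sketch only: subPath mount of one ConfigMap key.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume/data && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/test-volume/data",
					SubPath:   "data-1", // expose only this key's file, not the whole volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}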
Jan 29 11:05:39.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:05:39.801: INFO: namespace: e2e-tests-subpath-68cbc, resource: bindings, ignored listing per whitelist Jan 29 11:05:39.828: INFO: namespace e2e-tests-subpath-68cbc deletion completed in 6.2778288s • [SLOW TEST:43.502 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:05:39.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0129 11:06:10.697544 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 29 11:06:10.697: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:06:10.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-kcxsd" for this suite. 
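For reference, the Deployment variant of the orphaning delete used above, parallel to the earlier RC sketch: deleting with deleteOptions.PropagationPolicy=Orphan leaves the ReplicaSet behind for the 30-second check. Deployment name and namespace are placeholders; the Delete signature is the pre-1.18 client-go form matching this v1.13 run.

// Sketch only: delete a Deployment while orphaning its ReplicaSet.
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	orphan := metav1.DeletePropagationOrphan
	if err := client.AppsV1().Deployments("default").Delete(
		"example-deployment", &metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
}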
Jan 29 11:06:20.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:06:20.965: INFO: namespace: e2e-tests-gc-kcxsd, resource: bindings, ignored listing per whitelist Jan 29 11:06:21.097: INFO: namespace e2e-tests-gc-kcxsd deletion completed in 10.396650948s • [SLOW TEST:41.269 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:06:21.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:06:21.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-6z7kf" for this suite. 
Jan 29 11:06:28.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:06:28.314: INFO: namespace: e2e-tests-services-6z7kf, resource: bindings, ignored listing per whitelist Jan 29 11:06:28.362: INFO: namespace e2e-tests-services-6z7kf deletion completed in 6.480523227s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:7.265 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:06:28.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:06:41.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-crqhz" for this suite. 
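The ReplicationController spec above ("Given a Pod with a 'name' label pod-adoption is created ... Then the orphan pod is adopted") boils down to two manifests: a bare pod carrying the label, then an RC whose selector matches it. The manifests and image below are an illustrative sketch, not the exact fixtures the test framework generates.

kubectl --kubeconfig=/root/.kube/config create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: k8s.gcr.io/pause:3.1
EOF

kubectl --kubeconfig=/root/.kube/config create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: k8s.gcr.io/pause:3.1
EOF

# Instead of creating a second pod, the RC adopts the existing one by setting an ownerReference:
kubectl --kubeconfig=/root/.kube/config get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # expected: ReplicationController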
Jan 29 11:07:05.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:07:05.985: INFO: namespace: e2e-tests-replication-controller-crqhz, resource: bindings, ignored listing per whitelist Jan 29 11:07:06.046: INFO: namespace e2e-tests-replication-controller-crqhz deletion completed in 24.239721699s • [SLOW TEST:37.683 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:07:06.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7c56a5fe-4287-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume secrets Jan 29 11:07:06.255: INFO: Waiting up to 5m0s for pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-jm7lp" to be "success or failure" Jan 29 11:07:06.262: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647722ms Jan 29 11:07:08.406: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15027662s Jan 29 11:07:10.418: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162745643s Jan 29 11:07:12.711: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455850662s Jan 29 11:07:14.727: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.471725593s STEP: Saw pod success Jan 29 11:07:14.727: INFO: Pod "pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:07:14.733: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 29 11:07:14.961: INFO: Waiting for pod pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005 to disappear Jan 29 11:07:14.974: INFO: Pod pod-secrets-7c5776c4-4287-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:07:14.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jm7lp" for this suite. 
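The Secrets spec above mounts a secret volume into a pod that runs as a non-root user, with defaultMode controlling the mode of the projected files and fsGroup controlling their group ownership so the non-root user can still read them. A minimal sketch of such a pod; the secret name, uid/gid and mode are illustrative, the test generates its own:

kubectl --kubeconfig=/root/.kube/config create secret generic secret-test-demo --from-literal=data-1=value-1

kubectl --kubeconfig=/root/.kube/config create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  securityContext:
    runAsUser: 1000      # run the container as a non-root uid
    fsGroup: 1001        # files in the volume are group-owned by this gid
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/secret-volume; cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0440   # octal mode applied to each projected key
EOF

The "success or failure" wait in the log is the test reading that container's output and checking the mode and ownership it printed.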
Jan 29 11:07:21.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:07:21.037: INFO: namespace: e2e-tests-secrets-jm7lp, resource: bindings, ignored listing per whitelist Jan 29 11:07:21.158: INFO: namespace e2e-tests-secrets-jm7lp deletion completed in 6.1765213s • [SLOW TEST:15.112 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:07:21.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 29 11:07:41.599: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:41.614: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:43.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:44.107: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:45.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:45.623: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:47.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:47.642: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:49.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:49.633: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:51.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:51.633: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:53.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:53.644: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:55.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:55.638: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:57.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:57.631: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:07:59.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:07:59.632: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:08:01.614: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:08:01.629: INFO: Pod pod-with-prestop-http-hook still exists Jan 29 11:08:03.614: INFO: 
Waiting for pod pod-with-prestop-http-hook to disappear Jan 29 11:08:03.647: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:08:03.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-cggrg" for this suite. Jan 29 11:08:27.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:08:27.914: INFO: namespace: e2e-tests-container-lifecycle-hook-cggrg, resource: bindings, ignored listing per whitelist Jan 29 11:08:27.953: INFO: namespace e2e-tests-container-lifecycle-hook-cggrg deletion completed in 24.25957443s • [SLOW TEST:66.795 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:08:27.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jan 29 11:08:28.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 29 11:08:30.172: INFO: stderr: "" Jan 29 11:08:30.172: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:08:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bcsr4" for this suite. 
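Back to the Container Lifecycle Hook spec above ("should execute prestop http hook properly"): the manifest it exercises never appears in the log, but a preStop httpGet hook reduces to a few lines of pod spec. The path, port and names below are hypothetical; the e2e test points the hook at the separate handler pod it created earlier ("the container to handle the HTTPGet hook request"). The long "still exists" loop in the log is the pod's termination grace period elapsing while the kubelet delivers the hook before stopping the container.

kubectl --kubeconfig=/root/.kube/config create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /hook      # hypothetical handler path
          port: 8080       # hypothetical handler port
          # host: <handler pod IP>   # the e2e test targets its handler pod here
EOF

# Deleting the pod triggers the preStop HTTP GET before the container is stopped.
kubectl --kubeconfig=/root/.kube/config delete pod pod-with-prestop-http-hook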
Jan 29 11:08:36.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:08:36.451: INFO: namespace: e2e-tests-kubectl-bcsr4, resource: bindings, ignored listing per whitelist Jan 29 11:08:36.454: INFO: namespace e2e-tests-kubectl-bcsr4 deletion completed in 6.2695253s • [SLOW TEST:8.500 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:08:36.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b247f47d-4287-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 11:08:36.777: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-5m79b" to be "success or failure" Jan 29 11:08:36.787: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.448486ms Jan 29 11:08:38.802: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024769873s Jan 29 11:08:41.786: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.008513785s Jan 29 11:08:43.799: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.022189404s Jan 29 11:08:45.820: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.042558502s STEP: Saw pod success Jan 29 11:08:45.820: INFO: Pod "pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:08:45.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005 container projected-configmap-volume-test: STEP: delete the pod Jan 29 11:08:46.587: INFO: Waiting for pod pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005 to disappear Jan 29 11:08:46.625: INFO: Pod pod-projected-configmaps-b248bdc9-4287-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:08:46.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5m79b" for this suite. Jan 29 11:08:52.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:08:52.714: INFO: namespace: e2e-tests-projected-5m79b, resource: bindings, ignored listing per whitelist Jan 29 11:08:52.804: INFO: namespace e2e-tests-projected-5m79b deletion completed in 6.168357666s • [SLOW TEST:16.349 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:08:52.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-bbf8758f-4287-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 11:08:53.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-qr6xj" to be "success or failure" Jan 29 11:08:53.157: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 117.205779ms Jan 29 11:08:55.662: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621700106s Jan 29 11:08:57.678: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637467809s Jan 29 11:08:59.743: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.703402523s Jan 29 11:09:01.765: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.72511747s Jan 29 11:09:03.795: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755272819s STEP: Saw pod success Jan 29 11:09:03.796: INFO: Pod "pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:09:03.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 29 11:09:04.130: INFO: Waiting for pod pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005 to disappear Jan 29 11:09:04.176: INFO: Pod pod-configmaps-bbfcdc51-4287-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:09:04.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qr6xj" for this suite. Jan 29 11:09:10.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:09:10.460: INFO: namespace: e2e-tests-configmap-qr6xj, resource: bindings, ignored listing per whitelist Jan 29 11:09:10.460: INFO: namespace e2e-tests-configmap-qr6xj deletion completed in 6.2746311s • [SLOW TEST:17.656 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:09:10.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 29 11:09:19.492: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c68e70df-4287-11ea-8d54-0242ac110005" Jan 29 11:09:19.493: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c68e70df-4287-11ea-8d54-0242ac110005" in namespace "e2e-tests-pods-dc7m9" to be "terminated due to deadline exceeded" Jan 29 11:09:19.573: INFO: Pod "pod-update-activedeadlineseconds-c68e70df-4287-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 80.50843ms Jan 29 11:09:22.268: INFO: Pod "pod-update-activedeadlineseconds-c68e70df-4287-11ea-8d54-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.774924008s Jan 29 11:09:22.268: INFO: Pod "pod-update-activedeadlineseconds-c68e70df-4287-11ea-8d54-0242ac110005" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:09:22.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dc7m9" for this suite. Jan 29 11:09:28.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:09:28.802: INFO: namespace: e2e-tests-pods-dc7m9, resource: bindings, ignored listing per whitelist Jan 29 11:09:28.850: INFO: namespace e2e-tests-pods-dc7m9 deletion completed in 6.569861671s • [SLOW TEST:18.389 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:09:28.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 29 11:09:29.102: INFO: PodSpec: initContainers in spec.initContainers Jan 29 11:10:37.131: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d17f6dc9-4287-11ea-8d54-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-rsrl2", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-rsrl2/pods/pod-init-d17f6dc9-4287-11ea-8d54-0242ac110005", UID:"d1807f66-4287-11ea-a994-fa163e34d433", ResourceVersion:"19847954", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715892969, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"102928325", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qqjnn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001f0e3c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qqjnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qqjnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qqjnn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0011b2938), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a73260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0011b29b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0011b29d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0011b29d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0011b29dc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715892969, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715892969, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715892969, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715892969, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc002b06360), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0017d9110)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc0017d9180)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://98ea3caa1bdce415a883b8aeec7174dd2564b841f19a1ae66ba17ef746302cab"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b063a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b06380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:10:37.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rsrl2" for this suite. Jan 29 11:11:01.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:11:01.371: INFO: namespace: e2e-tests-init-container-rsrl2, resource: bindings, ignored listing per whitelist Jan 29 11:11:01.439: INFO: namespace e2e-tests-init-container-rsrl2 deletion completed in 24.18048744s • [SLOW TEST:92.589 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:11:01.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-sbdmr I0129 11:11:01.810024 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-sbdmr, replica count: 1 I0129 11:11:02.860995 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:03.861351 8 runners.go:184] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:04.862119 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:05.863327 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:06.863807 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:07.864592 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:08.864985 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0129 11:11:09.865542 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 29 11:11:10.037: INFO: Created: latency-svc-mspd8 Jan 29 11:11:10.189: INFO: Got endpoints: latency-svc-mspd8 [223.647702ms] Jan 29 11:11:10.450: INFO: Created: latency-svc-gcg49 Jan 29 11:11:10.493: INFO: Got endpoints: latency-svc-gcg49 [303.646058ms] Jan 29 11:11:10.663: INFO: Created: latency-svc-6gzmx Jan 29 11:11:10.669: INFO: Got endpoints: latency-svc-6gzmx [478.888838ms] Jan 29 11:11:10.961: INFO: Created: latency-svc-fg482 Jan 29 11:11:10.983: INFO: Got endpoints: latency-svc-fg482 [790.399075ms] Jan 29 11:11:11.139: INFO: Created: latency-svc-rlx7d Jan 29 11:11:11.144: INFO: Got endpoints: latency-svc-rlx7d [953.547264ms] Jan 29 11:11:11.298: INFO: Created: latency-svc-w5h2c Jan 29 11:11:11.357: INFO: Got endpoints: latency-svc-w5h2c [1.16442167s] Jan 29 11:11:11.373: INFO: Created: latency-svc-x6xj9 Jan 29 11:11:11.381: INFO: Got endpoints: latency-svc-x6xj9 [1.18870269s] Jan 29 11:11:11.532: INFO: Created: latency-svc-7rcfb Jan 29 11:11:11.618: INFO: Created: latency-svc-rtsxl Jan 29 11:11:11.619: INFO: Got endpoints: latency-svc-7rcfb [1.427214601s] Jan 29 11:11:11.769: INFO: Got endpoints: latency-svc-rtsxl [1.576045735s] Jan 29 11:11:11.801: INFO: Created: latency-svc-8h9ss Jan 29 11:11:11.949: INFO: Got endpoints: latency-svc-8h9ss [1.756829965s] Jan 29 11:11:11.962: INFO: Created: latency-svc-dh5n9 Jan 29 11:11:11.968: INFO: Got endpoints: latency-svc-dh5n9 [199.65439ms] Jan 29 11:11:12.344: INFO: Created: latency-svc-xdxrz Jan 29 11:11:12.375: INFO: Got endpoints: latency-svc-xdxrz [2.183600099s] Jan 29 11:11:12.514: INFO: Created: latency-svc-lwm84 Jan 29 11:11:12.534: INFO: Got endpoints: latency-svc-lwm84 [2.342328742s] Jan 29 11:11:12.705: INFO: Created: latency-svc-d6z5t Jan 29 11:11:12.718: INFO: Got endpoints: latency-svc-d6z5t [2.526433139s] Jan 29 11:11:12.760: INFO: Created: latency-svc-h9rsw Jan 29 11:11:12.776: INFO: Got endpoints: latency-svc-h9rsw [2.583358858s] Jan 29 11:11:12.980: INFO: Created: latency-svc-vdgwm Jan 29 11:11:13.012: INFO: Got endpoints: latency-svc-vdgwm [2.819225313s] Jan 29 11:11:13.280: INFO: Created: latency-svc-5vppk Jan 29 11:11:13.298: INFO: Got endpoints: latency-svc-5vppk [3.107013393s] Jan 29 11:11:13.463: INFO: Created: latency-svc-tg7qq Jan 29 11:11:13.478: INFO: Got endpoints: latency-svc-tg7qq [2.984567639s] Jan 29 11:11:13.536: INFO: Created: latency-svc-qllc7 Jan 29 
11:11:13.665: INFO: Got endpoints: latency-svc-qllc7 [2.99550012s] Jan 29 11:11:13.710: INFO: Created: latency-svc-xlgdh Jan 29 11:11:13.722: INFO: Got endpoints: latency-svc-xlgdh [2.738729647s] Jan 29 11:11:13.888: INFO: Created: latency-svc-mljbx Jan 29 11:11:13.925: INFO: Got endpoints: latency-svc-mljbx [2.780416316s] Jan 29 11:11:14.119: INFO: Created: latency-svc-s7nq6 Jan 29 11:11:14.143: INFO: Got endpoints: latency-svc-s7nq6 [2.785891067s] Jan 29 11:11:14.330: INFO: Created: latency-svc-pzsdr Jan 29 11:11:14.359: INFO: Got endpoints: latency-svc-pzsdr [2.97870586s] Jan 29 11:11:14.547: INFO: Created: latency-svc-jb4pf Jan 29 11:11:14.557: INFO: Got endpoints: latency-svc-jb4pf [2.9381179s] Jan 29 11:11:14.633: INFO: Created: latency-svc-wsn5h Jan 29 11:11:14.751: INFO: Got endpoints: latency-svc-wsn5h [2.802226974s] Jan 29 11:11:14.899: INFO: Created: latency-svc-j8qhj Jan 29 11:11:14.917: INFO: Got endpoints: latency-svc-j8qhj [2.948701837s] Jan 29 11:11:14.965: INFO: Created: latency-svc-n2nls Jan 29 11:11:14.987: INFO: Got endpoints: latency-svc-n2nls [2.611749294s] Jan 29 11:11:15.126: INFO: Created: latency-svc-57csn Jan 29 11:11:15.132: INFO: Got endpoints: latency-svc-57csn [2.597767139s] Jan 29 11:11:15.371: INFO: Created: latency-svc-8ft4g Jan 29 11:11:15.399: INFO: Created: latency-svc-9gsmh Jan 29 11:11:15.427: INFO: Got endpoints: latency-svc-9gsmh [2.651168596s] Jan 29 11:11:15.427: INFO: Got endpoints: latency-svc-8ft4g [2.708616174s] Jan 29 11:11:15.566: INFO: Created: latency-svc-6f2nw Jan 29 11:11:15.591: INFO: Got endpoints: latency-svc-6f2nw [2.578661993s] Jan 29 11:11:15.666: INFO: Created: latency-svc-5fdfk Jan 29 11:11:15.808: INFO: Got endpoints: latency-svc-5fdfk [2.509759901s] Jan 29 11:11:15.862: INFO: Created: latency-svc-l7r2h Jan 29 11:11:16.050: INFO: Got endpoints: latency-svc-l7r2h [2.571083541s] Jan 29 11:11:16.112: INFO: Created: latency-svc-fgszg Jan 29 11:11:16.317: INFO: Got endpoints: latency-svc-fgszg [2.651541895s] Jan 29 11:11:16.348: INFO: Created: latency-svc-zpmpr Jan 29 11:11:16.400: INFO: Got endpoints: latency-svc-zpmpr [2.67792756s] Jan 29 11:11:16.608: INFO: Created: latency-svc-smxzk Jan 29 11:11:16.661: INFO: Got endpoints: latency-svc-smxzk [2.736477398s] Jan 29 11:11:16.780: INFO: Created: latency-svc-vr8kw Jan 29 11:11:16.822: INFO: Got endpoints: latency-svc-vr8kw [2.67887183s] Jan 29 11:11:16.961: INFO: Created: latency-svc-stfgg Jan 29 11:11:16.989: INFO: Got endpoints: latency-svc-stfgg [2.629116595s] Jan 29 11:11:17.204: INFO: Created: latency-svc-bwgf9 Jan 29 11:11:17.210: INFO: Got endpoints: latency-svc-bwgf9 [2.652735085s] Jan 29 11:11:17.468: INFO: Created: latency-svc-8dnhr Jan 29 11:11:17.494: INFO: Got endpoints: latency-svc-8dnhr [2.74264271s] Jan 29 11:11:17.711: INFO: Created: latency-svc-2fqzx Jan 29 11:11:17.735: INFO: Got endpoints: latency-svc-2fqzx [2.817724178s] Jan 29 11:11:17.781: INFO: Created: latency-svc-fm49m Jan 29 11:11:17.958: INFO: Got endpoints: latency-svc-fm49m [2.970008725s] Jan 29 11:11:19.067: INFO: Created: latency-svc-wzpgq Jan 29 11:11:19.079: INFO: Got endpoints: latency-svc-wzpgq [3.947369613s] Jan 29 11:11:19.146: INFO: Created: latency-svc-4njdg Jan 29 11:11:19.247: INFO: Got endpoints: latency-svc-4njdg [3.820182473s] Jan 29 11:11:19.302: INFO: Created: latency-svc-9hmkv Jan 29 11:11:19.312: INFO: Got endpoints: latency-svc-9hmkv [3.88512243s] Jan 29 11:11:19.544: INFO: Created: latency-svc-29tb6 Jan 29 11:11:19.562: INFO: Got endpoints: latency-svc-29tb6 [3.971707663s] Jan 29 
11:11:19.728: INFO: Created: latency-svc-wcl6h Jan 29 11:11:19.731: INFO: Got endpoints: latency-svc-wcl6h [3.922881468s] Jan 29 11:11:19.922: INFO: Created: latency-svc-s4v4b Jan 29 11:11:19.937: INFO: Got endpoints: latency-svc-s4v4b [3.886802757s] Jan 29 11:11:19.983: INFO: Created: latency-svc-fh2tt Jan 29 11:11:20.146: INFO: Got endpoints: latency-svc-fh2tt [3.82861551s] Jan 29 11:11:20.180: INFO: Created: latency-svc-d886s Jan 29 11:11:20.185: INFO: Got endpoints: latency-svc-d886s [3.784328384s] Jan 29 11:11:20.338: INFO: Created: latency-svc-xrpj8 Jan 29 11:11:20.388: INFO: Got endpoints: latency-svc-xrpj8 [3.725862293s] Jan 29 11:11:20.593: INFO: Created: latency-svc-thztn Jan 29 11:11:20.596: INFO: Got endpoints: latency-svc-thztn [3.774017727s] Jan 29 11:11:20.648: INFO: Created: latency-svc-n5cx6 Jan 29 11:11:20.782: INFO: Got endpoints: latency-svc-n5cx6 [3.792707538s] Jan 29 11:11:20.854: INFO: Created: latency-svc-82th5 Jan 29 11:11:20.982: INFO: Got endpoints: latency-svc-82th5 [3.771683228s] Jan 29 11:11:20.997: INFO: Created: latency-svc-cn4gl Jan 29 11:11:21.005: INFO: Got endpoints: latency-svc-cn4gl [3.511168259s] Jan 29 11:11:21.492: INFO: Created: latency-svc-jvrzf Jan 29 11:11:21.511: INFO: Got endpoints: latency-svc-jvrzf [3.775494096s] Jan 29 11:11:21.723: INFO: Created: latency-svc-2r8q4 Jan 29 11:11:21.726: INFO: Got endpoints: latency-svc-2r8q4 [3.768317298s] Jan 29 11:11:21.812: INFO: Created: latency-svc-lq9dk Jan 29 11:11:21.946: INFO: Got endpoints: latency-svc-lq9dk [2.866762769s] Jan 29 11:11:21.976: INFO: Created: latency-svc-q5mfl Jan 29 11:11:21.988: INFO: Got endpoints: latency-svc-q5mfl [2.740647151s] Jan 29 11:11:22.127: INFO: Created: latency-svc-tg4cl Jan 29 11:11:22.148: INFO: Got endpoints: latency-svc-tg4cl [2.835790614s] Jan 29 11:11:22.317: INFO: Created: latency-svc-p2zbm Jan 29 11:11:22.343: INFO: Got endpoints: latency-svc-p2zbm [2.780772437s] Jan 29 11:11:22.412: INFO: Created: latency-svc-5phpc Jan 29 11:11:22.421: INFO: Got endpoints: latency-svc-5phpc [2.689856555s] Jan 29 11:11:22.598: INFO: Created: latency-svc-8wqb8 Jan 29 11:11:22.623: INFO: Got endpoints: latency-svc-8wqb8 [2.685642178s] Jan 29 11:11:22.670: INFO: Created: latency-svc-8whdc Jan 29 11:11:22.775: INFO: Got endpoints: latency-svc-8whdc [2.628378006s] Jan 29 11:11:22.805: INFO: Created: latency-svc-856p7 Jan 29 11:11:22.825: INFO: Got endpoints: latency-svc-856p7 [2.639492096s] Jan 29 11:11:23.101: INFO: Created: latency-svc-4ft4j Jan 29 11:11:23.121: INFO: Got endpoints: latency-svc-4ft4j [2.732849957s] Jan 29 11:11:23.289: INFO: Created: latency-svc-5hqmr Jan 29 11:11:23.305: INFO: Got endpoints: latency-svc-5hqmr [2.708505839s] Jan 29 11:11:23.485: INFO: Created: latency-svc-vhtcw Jan 29 11:11:23.503: INFO: Got endpoints: latency-svc-vhtcw [2.720876841s] Jan 29 11:11:23.741: INFO: Created: latency-svc-md42n Jan 29 11:11:23.741: INFO: Got endpoints: latency-svc-md42n [2.759296125s] Jan 29 11:11:24.480: INFO: Created: latency-svc-v4ggg Jan 29 11:11:24.540: INFO: Got endpoints: latency-svc-v4ggg [3.534625576s] Jan 29 11:11:24.790: INFO: Created: latency-svc-glclz Jan 29 11:11:24.907: INFO: Got endpoints: latency-svc-glclz [3.396037072s] Jan 29 11:11:25.152: INFO: Created: latency-svc-xxvnr Jan 29 11:11:25.163: INFO: Got endpoints: latency-svc-xxvnr [3.436474938s] Jan 29 11:11:25.359: INFO: Created: latency-svc-zmp5h Jan 29 11:11:25.389: INFO: Got endpoints: latency-svc-zmp5h [3.44264333s] Jan 29 11:11:25.646: INFO: Created: latency-svc-nwkt8 Jan 29 11:11:25.664: INFO: 
Created: latency-svc-9npxt Jan 29 11:11:25.671: INFO: Got endpoints: latency-svc-nwkt8 [3.682990015s] Jan 29 11:11:25.691: INFO: Got endpoints: latency-svc-9npxt [3.543020225s] Jan 29 11:11:25.910: INFO: Created: latency-svc-wf4nw Jan 29 11:11:25.914: INFO: Got endpoints: latency-svc-wf4nw [3.570444997s] Jan 29 11:11:26.087: INFO: Created: latency-svc-56vpg Jan 29 11:11:26.129: INFO: Got endpoints: latency-svc-56vpg [3.707345929s] Jan 29 11:11:26.244: INFO: Created: latency-svc-hbpjr Jan 29 11:11:26.265: INFO: Got endpoints: latency-svc-hbpjr [3.642639653s] Jan 29 11:11:26.304: INFO: Created: latency-svc-sf62k Jan 29 11:11:26.324: INFO: Got endpoints: latency-svc-sf62k [3.548710144s] Jan 29 11:11:26.453: INFO: Created: latency-svc-l8jw7 Jan 29 11:11:26.692: INFO: Got endpoints: latency-svc-l8jw7 [3.86699122s] Jan 29 11:11:26.720: INFO: Created: latency-svc-jpkbx Jan 29 11:11:27.154: INFO: Got endpoints: latency-svc-jpkbx [4.032767202s] Jan 29 11:11:27.400: INFO: Created: latency-svc-zhl7m Jan 29 11:11:27.472: INFO: Got endpoints: latency-svc-zhl7m [4.166467997s] Jan 29 11:11:27.626: INFO: Created: latency-svc-p56cz Jan 29 11:11:27.641: INFO: Got endpoints: latency-svc-p56cz [4.138574349s] Jan 29 11:11:27.693: INFO: Created: latency-svc-gbfsx Jan 29 11:11:27.858: INFO: Got endpoints: latency-svc-gbfsx [4.116601908s] Jan 29 11:11:27.892: INFO: Created: latency-svc-mrv5d Jan 29 11:11:27.931: INFO: Got endpoints: latency-svc-mrv5d [3.390114445s] Jan 29 11:11:28.046: INFO: Created: latency-svc-j722k Jan 29 11:11:28.059: INFO: Got endpoints: latency-svc-j722k [3.152158868s] Jan 29 11:11:28.122: INFO: Created: latency-svc-wz8cn Jan 29 11:11:28.133: INFO: Got endpoints: latency-svc-wz8cn [2.970240592s] Jan 29 11:11:28.268: INFO: Created: latency-svc-lzgxs Jan 29 11:11:28.310: INFO: Got endpoints: latency-svc-lzgxs [2.921179419s] Jan 29 11:11:28.354: INFO: Created: latency-svc-2krj6 Jan 29 11:11:28.513: INFO: Got endpoints: latency-svc-2krj6 [2.841161095s] Jan 29 11:11:28.563: INFO: Created: latency-svc-lrbcn Jan 29 11:11:28.577: INFO: Got endpoints: latency-svc-lrbcn [2.885202667s] Jan 29 11:11:28.778: INFO: Created: latency-svc-f5h6d Jan 29 11:11:28.778: INFO: Got endpoints: latency-svc-f5h6d [2.864032299s] Jan 29 11:11:28.945: INFO: Created: latency-svc-854qx Jan 29 11:11:28.957: INFO: Got endpoints: latency-svc-854qx [2.827915329s] Jan 29 11:11:29.008: INFO: Created: latency-svc-248nn Jan 29 11:11:29.022: INFO: Got endpoints: latency-svc-248nn [2.756924374s] Jan 29 11:11:29.146: INFO: Created: latency-svc-hpx4x Jan 29 11:11:29.175: INFO: Got endpoints: latency-svc-hpx4x [2.851006579s] Jan 29 11:11:29.241: INFO: Created: latency-svc-c7l4z Jan 29 11:11:29.371: INFO: Got endpoints: latency-svc-c7l4z [2.678404941s] Jan 29 11:11:29.416: INFO: Created: latency-svc-jtj9c Jan 29 11:11:29.437: INFO: Got endpoints: latency-svc-jtj9c [2.28307499s] Jan 29 11:11:29.484: INFO: Created: latency-svc-5nnvw Jan 29 11:11:29.651: INFO: Got endpoints: latency-svc-5nnvw [2.179184476s] Jan 29 11:11:29.679: INFO: Created: latency-svc-c9bq5 Jan 29 11:11:29.752: INFO: Got endpoints: latency-svc-c9bq5 [2.110146456s] Jan 29 11:11:29.967: INFO: Created: latency-svc-kstnw Jan 29 11:11:29.990: INFO: Got endpoints: latency-svc-kstnw [2.131887022s] Jan 29 11:11:30.062: INFO: Created: latency-svc-qhw6s Jan 29 11:11:30.186: INFO: Got endpoints: latency-svc-qhw6s [2.25454761s] Jan 29 11:11:30.206: INFO: Created: latency-svc-mzz6v Jan 29 11:11:30.217: INFO: Got endpoints: latency-svc-mzz6v [2.157225689s] Jan 29 11:11:30.410: INFO: 
Created: latency-svc-5b5bw Jan 29 11:11:30.467: INFO: Created: latency-svc-xvpb2 Jan 29 11:11:30.636: INFO: Created: latency-svc-6bg5r Jan 29 11:11:30.650: INFO: Got endpoints: latency-svc-5b5bw [2.516519835s] Jan 29 11:11:30.654: INFO: Got endpoints: latency-svc-6bg5r [2.141293847s] Jan 29 11:11:30.657: INFO: Got endpoints: latency-svc-xvpb2 [2.346461295s] Jan 29 11:11:30.718: INFO: Created: latency-svc-tm4qp Jan 29 11:11:30.927: INFO: Got endpoints: latency-svc-tm4qp [2.349702378s] Jan 29 11:11:30.980: INFO: Created: latency-svc-rc7lp Jan 29 11:11:30.990: INFO: Got endpoints: latency-svc-rc7lp [2.211799347s] Jan 29 11:11:31.131: INFO: Created: latency-svc-rrchc Jan 29 11:11:31.156: INFO: Got endpoints: latency-svc-rrchc [2.198902567s] Jan 29 11:11:31.204: INFO: Created: latency-svc-c9bdr Jan 29 11:11:31.312: INFO: Got endpoints: latency-svc-c9bdr [2.28896245s] Jan 29 11:11:31.347: INFO: Created: latency-svc-vzwjg Jan 29 11:11:31.372: INFO: Got endpoints: latency-svc-vzwjg [2.196580098s] Jan 29 11:11:31.602: INFO: Created: latency-svc-cjmn2 Jan 29 11:11:31.811: INFO: Got endpoints: latency-svc-cjmn2 [2.439300045s] Jan 29 11:11:31.811: INFO: Created: latency-svc-x6zrh Jan 29 11:11:31.833: INFO: Got endpoints: latency-svc-x6zrh [2.395922252s] Jan 29 11:11:32.058: INFO: Created: latency-svc-c7zmb Jan 29 11:11:32.084: INFO: Got endpoints: latency-svc-c7zmb [2.431979007s] Jan 29 11:11:32.155: INFO: Created: latency-svc-nd8dl Jan 29 11:11:32.276: INFO: Got endpoints: latency-svc-nd8dl [2.523934704s] Jan 29 11:11:32.302: INFO: Created: latency-svc-7cjf5 Jan 29 11:11:32.317: INFO: Got endpoints: latency-svc-7cjf5 [2.326811856s] Jan 29 11:11:32.375: INFO: Created: latency-svc-hknlh Jan 29 11:11:32.505: INFO: Got endpoints: latency-svc-hknlh [2.319334558s] Jan 29 11:11:32.544: INFO: Created: latency-svc-q7gdp Jan 29 11:11:32.580: INFO: Got endpoints: latency-svc-q7gdp [2.362963515s] Jan 29 11:11:32.807: INFO: Created: latency-svc-k5p47 Jan 29 11:11:32.827: INFO: Got endpoints: latency-svc-k5p47 [2.17704853s] Jan 29 11:11:33.098: INFO: Created: latency-svc-4vj9g Jan 29 11:11:33.133: INFO: Got endpoints: latency-svc-4vj9g [2.478696412s] Jan 29 11:11:34.029: INFO: Created: latency-svc-j5746 Jan 29 11:11:34.063: INFO: Got endpoints: latency-svc-j5746 [3.406026584s] Jan 29 11:11:34.242: INFO: Created: latency-svc-9npz7 Jan 29 11:11:34.253: INFO: Got endpoints: latency-svc-9npz7 [3.325846252s] Jan 29 11:11:34.322: INFO: Created: latency-svc-d7dkz Jan 29 11:11:34.444: INFO: Got endpoints: latency-svc-d7dkz [3.45334975s] Jan 29 11:11:34.503: INFO: Created: latency-svc-xrsdg Jan 29 11:11:34.535: INFO: Got endpoints: latency-svc-xrsdg [3.378530568s] Jan 29 11:11:34.672: INFO: Created: latency-svc-bspnb Jan 29 11:11:34.687: INFO: Got endpoints: latency-svc-bspnb [3.375670377s] Jan 29 11:11:34.758: INFO: Created: latency-svc-x9hkk Jan 29 11:11:34.927: INFO: Got endpoints: latency-svc-x9hkk [3.555358746s] Jan 29 11:11:34.967: INFO: Created: latency-svc-5xgjz Jan 29 11:11:34.993: INFO: Got endpoints: latency-svc-5xgjz [3.182344223s] Jan 29 11:11:35.204: INFO: Created: latency-svc-fxhhp Jan 29 11:11:35.209: INFO: Got endpoints: latency-svc-fxhhp [3.375240559s] Jan 29 11:11:35.259: INFO: Created: latency-svc-kbttj Jan 29 11:11:35.353: INFO: Got endpoints: latency-svc-kbttj [3.26870894s] Jan 29 11:11:35.369: INFO: Created: latency-svc-92z57 Jan 29 11:11:35.391: INFO: Got endpoints: latency-svc-92z57 [3.114884348s] Jan 29 11:11:35.436: INFO: Created: latency-svc-78lml Jan 29 11:11:35.448: INFO: Got endpoints: 
latency-svc-78lml [3.129918465s] Jan 29 11:11:35.583: INFO: Created: latency-svc-5tx9n Jan 29 11:11:35.598: INFO: Got endpoints: latency-svc-5tx9n [3.092591658s] Jan 29 11:11:35.746: INFO: Created: latency-svc-q5cdn Jan 29 11:11:35.760: INFO: Got endpoints: latency-svc-q5cdn [3.180141319s] Jan 29 11:11:35.974: INFO: Created: latency-svc-xfj6k Jan 29 11:11:35.994: INFO: Got endpoints: latency-svc-xfj6k [3.166771786s] Jan 29 11:11:36.049: INFO: Created: latency-svc-hmptj Jan 29 11:11:36.197: INFO: Got endpoints: latency-svc-hmptj [3.06343855s] Jan 29 11:11:36.225: INFO: Created: latency-svc-vrq4j Jan 29 11:11:36.237: INFO: Got endpoints: latency-svc-vrq4j [2.173505624s] Jan 29 11:11:36.294: INFO: Created: latency-svc-t6f65 Jan 29 11:11:36.380: INFO: Got endpoints: latency-svc-t6f65 [2.127021933s] Jan 29 11:11:36.412: INFO: Created: latency-svc-7rv7h Jan 29 11:11:36.470: INFO: Got endpoints: latency-svc-7rv7h [2.026465421s] Jan 29 11:11:36.477: INFO: Created: latency-svc-jnx7t Jan 29 11:11:36.585: INFO: Got endpoints: latency-svc-jnx7t [2.050615485s] Jan 29 11:11:36.657: INFO: Created: latency-svc-9st8l Jan 29 11:11:36.793: INFO: Got endpoints: latency-svc-9st8l [2.10539983s] Jan 29 11:11:36.828: INFO: Created: latency-svc-jwckn Jan 29 11:11:36.860: INFO: Got endpoints: latency-svc-jwckn [1.933003174s] Jan 29 11:11:37.026: INFO: Created: latency-svc-f46xb Jan 29 11:11:37.075: INFO: Got endpoints: latency-svc-f46xb [2.081493709s] Jan 29 11:11:37.189: INFO: Created: latency-svc-c4vwg Jan 29 11:11:37.190: INFO: Got endpoints: latency-svc-c4vwg [1.980970901s] Jan 29 11:11:37.263: INFO: Created: latency-svc-bq7sv Jan 29 11:11:37.351: INFO: Got endpoints: latency-svc-bq7sv [1.997666166s] Jan 29 11:11:37.372: INFO: Created: latency-svc-ng4wq Jan 29 11:11:37.412: INFO: Got endpoints: latency-svc-ng4wq [2.020184136s] Jan 29 11:11:37.414: INFO: Created: latency-svc-88vzg Jan 29 11:11:37.535: INFO: Got endpoints: latency-svc-88vzg [2.086990781s] Jan 29 11:11:37.547: INFO: Created: latency-svc-hwbbp Jan 29 11:11:37.600: INFO: Got endpoints: latency-svc-hwbbp [2.001084683s] Jan 29 11:11:37.653: INFO: Created: latency-svc-nj6rc Jan 29 11:11:37.828: INFO: Got endpoints: latency-svc-nj6rc [2.067714047s] Jan 29 11:11:37.879: INFO: Created: latency-svc-w6x2m Jan 29 11:11:37.900: INFO: Got endpoints: latency-svc-w6x2m [1.905694211s] Jan 29 11:11:38.136: INFO: Created: latency-svc-zbn4p Jan 29 11:11:38.296: INFO: Got endpoints: latency-svc-zbn4p [2.098695581s] Jan 29 11:11:38.336: INFO: Created: latency-svc-gwz64 Jan 29 11:11:38.344: INFO: Got endpoints: latency-svc-gwz64 [2.107030378s] Jan 29 11:11:38.550: INFO: Created: latency-svc-8m2bn Jan 29 11:11:38.581: INFO: Got endpoints: latency-svc-8m2bn [2.200288281s] Jan 29 11:11:39.480: INFO: Created: latency-svc-m7rbj Jan 29 11:11:39.586: INFO: Got endpoints: latency-svc-m7rbj [3.114914979s] Jan 29 11:11:39.768: INFO: Created: latency-svc-7whxd Jan 29 11:11:39.786: INFO: Got endpoints: latency-svc-7whxd [3.200196012s] Jan 29 11:11:40.047: INFO: Created: latency-svc-54hjk Jan 29 11:11:40.065: INFO: Got endpoints: latency-svc-54hjk [3.271230628s] Jan 29 11:11:40.312: INFO: Created: latency-svc-zm466 Jan 29 11:11:40.330: INFO: Got endpoints: latency-svc-zm466 [3.468520739s] Jan 29 11:11:40.868: INFO: Created: latency-svc-cgq5d Jan 29 11:11:41.023: INFO: Created: latency-svc-hjh8c Jan 29 11:11:41.027: INFO: Got endpoints: latency-svc-cgq5d [3.951189988s] Jan 29 11:11:41.062: INFO: Got endpoints: latency-svc-hjh8c [3.87168474s] Jan 29 11:11:41.088: INFO: Created: 
latency-svc-s9glv Jan 29 11:11:41.101: INFO: Got endpoints: latency-svc-s9glv [3.750341786s] Jan 29 11:11:41.214: INFO: Created: latency-svc-2chx9 Jan 29 11:11:41.228: INFO: Got endpoints: latency-svc-2chx9 [3.815824019s] Jan 29 11:11:41.285: INFO: Created: latency-svc-87jxv Jan 29 11:11:41.410: INFO: Got endpoints: latency-svc-87jxv [3.874760668s] Jan 29 11:11:41.432: INFO: Created: latency-svc-75ff9 Jan 29 11:11:41.455: INFO: Got endpoints: latency-svc-75ff9 [3.854777757s] Jan 29 11:11:41.591: INFO: Created: latency-svc-tn4jq Jan 29 11:11:41.600: INFO: Got endpoints: latency-svc-tn4jq [3.771864728s] Jan 29 11:11:41.670: INFO: Created: latency-svc-rnfk2 Jan 29 11:11:41.772: INFO: Got endpoints: latency-svc-rnfk2 [3.871730066s] Jan 29 11:11:41.789: INFO: Created: latency-svc-xkvqc Jan 29 11:11:41.807: INFO: Got endpoints: latency-svc-xkvqc [3.509999325s] Jan 29 11:11:42.016: INFO: Created: latency-svc-s2bf9 Jan 29 11:11:42.028: INFO: Got endpoints: latency-svc-s2bf9 [3.683295152s] Jan 29 11:11:42.074: INFO: Created: latency-svc-dqfrf Jan 29 11:11:42.088: INFO: Got endpoints: latency-svc-dqfrf [3.506684833s] Jan 29 11:11:42.228: INFO: Created: latency-svc-f2n6g Jan 29 11:11:42.237: INFO: Got endpoints: latency-svc-f2n6g [2.650957096s] Jan 29 11:11:42.431: INFO: Created: latency-svc-nc4gl Jan 29 11:11:42.519: INFO: Got endpoints: latency-svc-nc4gl [2.733314054s] Jan 29 11:11:42.532: INFO: Created: latency-svc-7cbmd Jan 29 11:11:42.692: INFO: Created: latency-svc-zlmt5 Jan 29 11:11:42.694: INFO: Got endpoints: latency-svc-7cbmd [2.629240645s] Jan 29 11:11:42.714: INFO: Got endpoints: latency-svc-zlmt5 [2.384623482s] Jan 29 11:11:42.942: INFO: Created: latency-svc-58fv7 Jan 29 11:11:42.962: INFO: Got endpoints: latency-svc-58fv7 [1.935535421s] Jan 29 11:11:43.006: INFO: Created: latency-svc-hwdr6 Jan 29 11:11:43.025: INFO: Got endpoints: latency-svc-hwdr6 [1.963460865s] Jan 29 11:11:43.146: INFO: Created: latency-svc-mm4nh Jan 29 11:11:43.168: INFO: Got endpoints: latency-svc-mm4nh [2.066455989s] Jan 29 11:11:43.224: INFO: Created: latency-svc-f2vdx Jan 29 11:11:43.377: INFO: Got endpoints: latency-svc-f2vdx [2.149530718s] Jan 29 11:11:43.409: INFO: Created: latency-svc-xb54n Jan 29 11:11:43.434: INFO: Got endpoints: latency-svc-xb54n [2.023714911s] Jan 29 11:11:43.470: INFO: Created: latency-svc-rspw8 Jan 29 11:11:43.564: INFO: Got endpoints: latency-svc-rspw8 [2.109304568s] Jan 29 11:11:43.639: INFO: Created: latency-svc-5zjh8 Jan 29 11:11:43.795: INFO: Got endpoints: latency-svc-5zjh8 [2.194812112s] Jan 29 11:11:43.844: INFO: Created: latency-svc-7q29g Jan 29 11:11:43.999: INFO: Got endpoints: latency-svc-7q29g [2.227048479s] Jan 29 11:11:44.091: INFO: Created: latency-svc-wbcqx Jan 29 11:11:44.291: INFO: Got endpoints: latency-svc-wbcqx [2.484624693s] Jan 29 11:11:44.334: INFO: Created: latency-svc-28v5h Jan 29 11:11:44.371: INFO: Got endpoints: latency-svc-28v5h [2.343191649s] Jan 29 11:11:44.623: INFO: Created: latency-svc-xnh4c Jan 29 11:11:44.646: INFO: Got endpoints: latency-svc-xnh4c [2.558293091s] Jan 29 11:11:44.793: INFO: Created: latency-svc-7mjfq Jan 29 11:11:44.807: INFO: Got endpoints: latency-svc-7mjfq [2.570082508s] Jan 29 11:11:44.994: INFO: Created: latency-svc-kthrp Jan 29 11:11:45.017: INFO: Got endpoints: latency-svc-kthrp [2.496656574s] Jan 29 11:11:45.065: INFO: Created: latency-svc-j44xq Jan 29 11:11:45.076: INFO: Got endpoints: latency-svc-j44xq [2.381771161s] Jan 29 11:11:45.262: INFO: Created: latency-svc-wfg9k Jan 29 11:11:45.271: INFO: Got endpoints: 
latency-svc-wfg9k [2.556665761s] Jan 29 11:11:45.436: INFO: Created: latency-svc-7gcbr Jan 29 11:11:45.476: INFO: Got endpoints: latency-svc-7gcbr [2.513367811s] Jan 29 11:11:45.623: INFO: Created: latency-svc-4ftkl Jan 29 11:11:45.647: INFO: Got endpoints: latency-svc-4ftkl [2.622006789s] Jan 29 11:11:45.708: INFO: Created: latency-svc-4kf79 Jan 29 11:11:45.788: INFO: Got endpoints: latency-svc-4kf79 [2.620215317s] Jan 29 11:11:46.168: INFO: Created: latency-svc-st65f Jan 29 11:11:46.355: INFO: Got endpoints: latency-svc-st65f [2.977799498s] Jan 29 11:11:46.698: INFO: Created: latency-svc-tsp5x Jan 29 11:11:46.982: INFO: Got endpoints: latency-svc-tsp5x [3.547639761s] Jan 29 11:11:47.024: INFO: Created: latency-svc-wk24l Jan 29 11:11:47.063: INFO: Got endpoints: latency-svc-wk24l [3.498533872s] Jan 29 11:11:47.179: INFO: Created: latency-svc-tp8v4 Jan 29 11:11:47.191: INFO: Got endpoints: latency-svc-tp8v4 [3.395151228s] Jan 29 11:11:47.346: INFO: Created: latency-svc-dnbcq Jan 29 11:11:47.430: INFO: Got endpoints: latency-svc-dnbcq [3.430907922s] Jan 29 11:11:47.521: INFO: Created: latency-svc-nlv9m Jan 29 11:11:47.531: INFO: Got endpoints: latency-svc-nlv9m [3.239771299s] Jan 29 11:11:47.560: INFO: Created: latency-svc-vbdv2 Jan 29 11:11:47.581: INFO: Got endpoints: latency-svc-vbdv2 [3.209383058s] Jan 29 11:11:47.698: INFO: Created: latency-svc-d6ts6 Jan 29 11:11:47.723: INFO: Got endpoints: latency-svc-d6ts6 [3.076486112s] Jan 29 11:11:47.800: INFO: Created: latency-svc-8bgrr Jan 29 11:11:47.946: INFO: Got endpoints: latency-svc-8bgrr [3.138269632s] Jan 29 11:11:47.977: INFO: Created: latency-svc-z24jx Jan 29 11:11:47.985: INFO: Got endpoints: latency-svc-z24jx [2.96853514s] Jan 29 11:11:48.035: INFO: Created: latency-svc-fmszd Jan 29 11:11:48.183: INFO: Got endpoints: latency-svc-fmszd [3.10719212s] Jan 29 11:11:48.209: INFO: Created: latency-svc-2qtv2 Jan 29 11:11:48.230: INFO: Got endpoints: latency-svc-2qtv2 [2.958424964s] Jan 29 11:11:48.272: INFO: Created: latency-svc-548wk Jan 29 11:11:48.347: INFO: Got endpoints: latency-svc-548wk [2.871171277s] Jan 29 11:11:48.362: INFO: Created: latency-svc-65dgl Jan 29 11:11:48.372: INFO: Got endpoints: latency-svc-65dgl [2.724928209s] Jan 29 11:11:48.373: INFO: Latencies: [199.65439ms 303.646058ms 478.888838ms 790.399075ms 953.547264ms 1.16442167s 1.18870269s 1.427214601s 1.576045735s 1.756829965s 1.905694211s 1.933003174s 1.935535421s 1.963460865s 1.980970901s 1.997666166s 2.001084683s 2.020184136s 2.023714911s 2.026465421s 2.050615485s 2.066455989s 2.067714047s 2.081493709s 2.086990781s 2.098695581s 2.10539983s 2.107030378s 2.109304568s 2.110146456s 2.127021933s 2.131887022s 2.141293847s 2.149530718s 2.157225689s 2.173505624s 2.17704853s 2.179184476s 2.183600099s 2.194812112s 2.196580098s 2.198902567s 2.200288281s 2.211799347s 2.227048479s 2.25454761s 2.28307499s 2.28896245s 2.319334558s 2.326811856s 2.342328742s 2.343191649s 2.346461295s 2.349702378s 2.362963515s 2.381771161s 2.384623482s 2.395922252s 2.431979007s 2.439300045s 2.478696412s 2.484624693s 2.496656574s 2.509759901s 2.513367811s 2.516519835s 2.523934704s 2.526433139s 2.556665761s 2.558293091s 2.570082508s 2.571083541s 2.578661993s 2.583358858s 2.597767139s 2.611749294s 2.620215317s 2.622006789s 2.628378006s 2.629116595s 2.629240645s 2.639492096s 2.650957096s 2.651168596s 2.651541895s 2.652735085s 2.67792756s 2.678404941s 2.67887183s 2.685642178s 2.689856555s 2.708505839s 2.708616174s 2.720876841s 2.724928209s 2.732849957s 2.733314054s 2.736477398s 2.738729647s 2.740647151s 
2.74264271s 2.756924374s 2.759296125s 2.780416316s 2.780772437s 2.785891067s 2.802226974s 2.817724178s 2.819225313s 2.827915329s 2.835790614s 2.841161095s 2.851006579s 2.864032299s 2.866762769s 2.871171277s 2.885202667s 2.921179419s 2.9381179s 2.948701837s 2.958424964s 2.96853514s 2.970008725s 2.970240592s 2.977799498s 2.97870586s 2.984567639s 2.99550012s 3.06343855s 3.076486112s 3.092591658s 3.107013393s 3.10719212s 3.114884348s 3.114914979s 3.129918465s 3.138269632s 3.152158868s 3.166771786s 3.180141319s 3.182344223s 3.200196012s 3.209383058s 3.239771299s 3.26870894s 3.271230628s 3.325846252s 3.375240559s 3.375670377s 3.378530568s 3.390114445s 3.395151228s 3.396037072s 3.406026584s 3.430907922s 3.436474938s 3.44264333s 3.45334975s 3.468520739s 3.498533872s 3.506684833s 3.509999325s 3.511168259s 3.534625576s 3.543020225s 3.547639761s 3.548710144s 3.555358746s 3.570444997s 3.642639653s 3.682990015s 3.683295152s 3.707345929s 3.725862293s 3.750341786s 3.768317298s 3.771683228s 3.771864728s 3.774017727s 3.775494096s 3.784328384s 3.792707538s 3.815824019s 3.820182473s 3.82861551s 3.854777757s 3.86699122s 3.87168474s 3.871730066s 3.874760668s 3.88512243s 3.886802757s 3.922881468s 3.947369613s 3.951189988s 3.971707663s 4.032767202s 4.116601908s 4.138574349s 4.166467997s] Jan 29 11:11:48.373: INFO: 50 %ile: 2.74264271s Jan 29 11:11:48.373: INFO: 90 %ile: 3.784328384s Jan 29 11:11:48.373: INFO: 99 %ile: 4.138574349s Jan 29 11:11:48.373: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:11:48.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-sbdmr" for this suite. 
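The latency list above is reduced to 50/90/99 %ile summaries over 200 samples. A minimal Go sketch of that reduction, assuming a simple sort-and-index percentile; the rounding rule here is an illustrative guess, not the e2e framework's exact code:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (1 <= p <= 100) of the samples by
// sorting and indexing; the ceiling-style rounding is an assumption for
// illustration rather than the framework's exact rule.
func percentile(samples []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := (len(sorted)*p + 99) / 100 // 1-based ceiling index
	if idx < 1 {
		idx = 1
	}
	return sorted[idx-1]
}

func main() {
	// A few of the durations listed above, in nanoseconds.
	samples := []time.Duration{
		199654390, 2742642710, 3784328384, 4138574349, 4166467997,
	}
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
```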
Jan 29 11:13:24.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:13:24.550: INFO: namespace: e2e-tests-svc-latency-sbdmr, resource: bindings, ignored listing per whitelist Jan 29 11:13:24.719: INFO: namespace e2e-tests-svc-latency-sbdmr deletion completed in 1m36.33827713s • [SLOW TEST:143.279 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:13:24.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jan 29 11:13:24.897: INFO: Waiting up to 5m0s for pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-kszsw" to be "success or failure" Jan 29 11:13:24.902: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.870902ms Jan 29 11:13:27.079: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181735034s Jan 29 11:13:29.094: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196689307s Jan 29 11:13:31.216: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318981099s Jan 29 11:13:33.233: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.336142456s STEP: Saw pod success Jan 29 11:13:33.233: INFO: Pod "pod-5e087bcc-4288-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:13:33.240: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5e087bcc-4288-11ea-8d54-0242ac110005 container test-container: STEP: delete the pod Jan 29 11:13:33.356: INFO: Waiting for pod pod-5e087bcc-4288-11ea-8d54-0242ac110005 to disappear Jan 29 11:13:33.372: INFO: Pod pod-5e087bcc-4288-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:13:33.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kszsw" for this suite. 
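The pod spec behind the tmpfs test above is not printed in the log. A minimal sketch of what such a pod can look like with the k8s.io/api types, assuming the usual shape; the names, image, command, and mount path are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: an emptyDir backed by tmpfs via the "Memory" medium,
	// mounted into a short-lived container that inspects the mount.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
					},
				},
			}},
		},
	}
	fmt.Printf("volume source: %+v\n", *pod.Spec.Volumes[0].EmptyDir)
}
```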
Jan 29 11:13:39.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:13:39.587: INFO: namespace: e2e-tests-emptydir-kszsw, resource: bindings, ignored listing per whitelist Jan 29 11:13:39.621: INFO: namespace e2e-tests-emptydir-kszsw deletion completed in 6.234487035s • [SLOW TEST:14.902 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:13:39.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 29 11:14:04.217: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:04.218: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:04.297678 8 log.go:172] (0xc0000ead10) (0xc001e75a40) Create stream I0129 11:14:04.297941 8 log.go:172] (0xc0000ead10) (0xc001e75a40) Stream added, broadcasting: 1 I0129 11:14:04.302947 8 log.go:172] (0xc0000ead10) Reply frame received for 1 I0129 11:14:04.302977 8 log.go:172] (0xc0000ead10) (0xc001e75ae0) Create stream I0129 11:14:04.302990 8 log.go:172] (0xc0000ead10) (0xc001e75ae0) Stream added, broadcasting: 3 I0129 11:14:04.304173 8 log.go:172] (0xc0000ead10) Reply frame received for 3 I0129 11:14:04.304201 8 log.go:172] (0xc0000ead10) (0xc001db4dc0) Create stream I0129 11:14:04.304212 8 log.go:172] (0xc0000ead10) (0xc001db4dc0) Stream added, broadcasting: 5 I0129 11:14:04.305454 8 log.go:172] (0xc0000ead10) Reply frame received for 5 I0129 11:14:04.419418 8 log.go:172] (0xc0000ead10) Data frame received for 3 I0129 11:14:04.419691 8 log.go:172] (0xc001e75ae0) (3) Data frame handling I0129 11:14:04.419819 8 log.go:172] (0xc001e75ae0) (3) Data frame sent I0129 11:14:04.666930 8 log.go:172] (0xc0000ead10) Data frame received for 1 I0129 11:14:04.667099 8 log.go:172] (0xc0000ead10) (0xc001e75ae0) Stream removed, broadcasting: 3 I0129 11:14:04.667168 8 log.go:172] (0xc001e75a40) (1) Data frame handling I0129 11:14:04.667225 8 log.go:172] (0xc001e75a40) (1) Data frame sent I0129 11:14:04.667427 8 log.go:172] (0xc0000ead10) (0xc001db4dc0) Stream removed, broadcasting: 5 I0129 11:14:04.667769 8 log.go:172] (0xc0000ead10) (0xc001e75a40) Stream removed, broadcasting: 1 I0129 11:14:04.667856 8 
log.go:172] (0xc0000ead10) Go away received I0129 11:14:04.668963 8 log.go:172] (0xc0000ead10) (0xc001e75a40) Stream removed, broadcasting: 1 I0129 11:14:04.669024 8 log.go:172] (0xc0000ead10) (0xc001e75ae0) Stream removed, broadcasting: 3 I0129 11:14:04.669050 8 log.go:172] (0xc0000ead10) (0xc001db4dc0) Stream removed, broadcasting: 5 Jan 29 11:14:04.669: INFO: Exec stderr: "" Jan 29 11:14:04.669: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:04.669: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:04.773899 8 log.go:172] (0xc0018842c0) (0xc001e988c0) Create stream I0129 11:14:04.774034 8 log.go:172] (0xc0018842c0) (0xc001e988c0) Stream added, broadcasting: 1 I0129 11:14:04.779651 8 log.go:172] (0xc0018842c0) Reply frame received for 1 I0129 11:14:04.779726 8 log.go:172] (0xc0018842c0) (0xc001631ae0) Create stream I0129 11:14:04.779747 8 log.go:172] (0xc0018842c0) (0xc001631ae0) Stream added, broadcasting: 3 I0129 11:14:04.781539 8 log.go:172] (0xc0018842c0) Reply frame received for 3 I0129 11:14:04.781651 8 log.go:172] (0xc0018842c0) (0xc001e98960) Create stream I0129 11:14:04.781675 8 log.go:172] (0xc0018842c0) (0xc001e98960) Stream added, broadcasting: 5 I0129 11:14:04.782823 8 log.go:172] (0xc0018842c0) Reply frame received for 5 I0129 11:14:04.947750 8 log.go:172] (0xc0018842c0) Data frame received for 3 I0129 11:14:04.947834 8 log.go:172] (0xc001631ae0) (3) Data frame handling I0129 11:14:04.947877 8 log.go:172] (0xc001631ae0) (3) Data frame sent I0129 11:14:05.072687 8 log.go:172] (0xc0018842c0) Data frame received for 1 I0129 11:14:05.072821 8 log.go:172] (0xc0018842c0) (0xc001631ae0) Stream removed, broadcasting: 3 I0129 11:14:05.072922 8 log.go:172] (0xc001e988c0) (1) Data frame handling I0129 11:14:05.072964 8 log.go:172] (0xc001e988c0) (1) Data frame sent I0129 11:14:05.073046 8 log.go:172] (0xc0018842c0) (0xc001e98960) Stream removed, broadcasting: 5 I0129 11:14:05.073093 8 log.go:172] (0xc0018842c0) (0xc001e988c0) Stream removed, broadcasting: 1 I0129 11:14:05.073122 8 log.go:172] (0xc0018842c0) Go away received I0129 11:14:05.073350 8 log.go:172] (0xc0018842c0) (0xc001e988c0) Stream removed, broadcasting: 1 I0129 11:14:05.073372 8 log.go:172] (0xc0018842c0) (0xc001631ae0) Stream removed, broadcasting: 3 I0129 11:14:05.073399 8 log.go:172] (0xc0018842c0) (0xc001e98960) Stream removed, broadcasting: 5 Jan 29 11:14:05.073: INFO: Exec stderr: "" Jan 29 11:14:05.073: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:05.073: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:05.145919 8 log.go:172] (0xc00271c2c0) (0xc001db5040) Create stream I0129 11:14:05.146003 8 log.go:172] (0xc00271c2c0) (0xc001db5040) Stream added, broadcasting: 1 I0129 11:14:05.150331 8 log.go:172] (0xc00271c2c0) Reply frame received for 1 I0129 11:14:05.150372 8 log.go:172] (0xc00271c2c0) (0xc001db50e0) Create stream I0129 11:14:05.150381 8 log.go:172] (0xc00271c2c0) (0xc001db50e0) Stream added, broadcasting: 3 I0129 11:14:05.151412 8 log.go:172] (0xc00271c2c0) Reply frame received for 3 I0129 11:14:05.151476 8 log.go:172] (0xc00271c2c0) (0xc0009332c0) Create stream I0129 11:14:05.151494 8 log.go:172] (0xc00271c2c0) (0xc0009332c0) Stream added, 
broadcasting: 5 I0129 11:14:05.152706 8 log.go:172] (0xc00271c2c0) Reply frame received for 5 I0129 11:14:05.267612 8 log.go:172] (0xc00271c2c0) Data frame received for 3 I0129 11:14:05.267744 8 log.go:172] (0xc001db50e0) (3) Data frame handling I0129 11:14:05.267779 8 log.go:172] (0xc001db50e0) (3) Data frame sent I0129 11:14:05.368213 8 log.go:172] (0xc00271c2c0) Data frame received for 1 I0129 11:14:05.368324 8 log.go:172] (0xc00271c2c0) (0xc001db50e0) Stream removed, broadcasting: 3 I0129 11:14:05.368358 8 log.go:172] (0xc001db5040) (1) Data frame handling I0129 11:14:05.368384 8 log.go:172] (0xc001db5040) (1) Data frame sent I0129 11:14:05.368412 8 log.go:172] (0xc00271c2c0) (0xc0009332c0) Stream removed, broadcasting: 5 I0129 11:14:05.368495 8 log.go:172] (0xc00271c2c0) (0xc001db5040) Stream removed, broadcasting: 1 I0129 11:14:05.368537 8 log.go:172] (0xc00271c2c0) Go away received I0129 11:14:05.368784 8 log.go:172] (0xc00271c2c0) (0xc001db5040) Stream removed, broadcasting: 1 I0129 11:14:05.368802 8 log.go:172] (0xc00271c2c0) (0xc001db50e0) Stream removed, broadcasting: 3 I0129 11:14:05.368818 8 log.go:172] (0xc00271c2c0) (0xc0009332c0) Stream removed, broadcasting: 5 Jan 29 11:14:05.368: INFO: Exec stderr: "" Jan 29 11:14:05.368: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:05.368: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:05.426027 8 log.go:172] (0xc001eaa2c0) (0xc000933540) Create stream I0129 11:14:05.426179 8 log.go:172] (0xc001eaa2c0) (0xc000933540) Stream added, broadcasting: 1 I0129 11:14:05.432784 8 log.go:172] (0xc001eaa2c0) Reply frame received for 1 I0129 11:14:05.432830 8 log.go:172] (0xc001eaa2c0) (0xc001e98a00) Create stream I0129 11:14:05.432847 8 log.go:172] (0xc001eaa2c0) (0xc001e98a00) Stream added, broadcasting: 3 I0129 11:14:05.433809 8 log.go:172] (0xc001eaa2c0) Reply frame received for 3 I0129 11:14:05.433835 8 log.go:172] (0xc001eaa2c0) (0xc001e98aa0) Create stream I0129 11:14:05.433842 8 log.go:172] (0xc001eaa2c0) (0xc001e98aa0) Stream added, broadcasting: 5 I0129 11:14:05.436352 8 log.go:172] (0xc001eaa2c0) Reply frame received for 5 I0129 11:14:05.546619 8 log.go:172] (0xc001eaa2c0) Data frame received for 3 I0129 11:14:05.546688 8 log.go:172] (0xc001e98a00) (3) Data frame handling I0129 11:14:05.546712 8 log.go:172] (0xc001e98a00) (3) Data frame sent I0129 11:14:05.646081 8 log.go:172] (0xc001eaa2c0) (0xc001e98a00) Stream removed, broadcasting: 3 I0129 11:14:05.646205 8 log.go:172] (0xc001eaa2c0) Data frame received for 1 I0129 11:14:05.646228 8 log.go:172] (0xc001eaa2c0) (0xc001e98aa0) Stream removed, broadcasting: 5 I0129 11:14:05.646287 8 log.go:172] (0xc000933540) (1) Data frame handling I0129 11:14:05.646306 8 log.go:172] (0xc000933540) (1) Data frame sent I0129 11:14:05.646319 8 log.go:172] (0xc001eaa2c0) (0xc000933540) Stream removed, broadcasting: 1 I0129 11:14:05.646342 8 log.go:172] (0xc001eaa2c0) Go away received I0129 11:14:05.646959 8 log.go:172] (0xc001eaa2c0) (0xc000933540) Stream removed, broadcasting: 1 I0129 11:14:05.647057 8 log.go:172] (0xc001eaa2c0) (0xc001e98a00) Stream removed, broadcasting: 3 I0129 11:14:05.647074 8 log.go:172] (0xc001eaa2c0) (0xc001e98aa0) Stream removed, broadcasting: 5 Jan 29 11:14:05.647: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 
Jan 29 11:14:05.647: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:05.647: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:05.700529 8 log.go:172] (0xc0000eb1e0) (0xc0009280a0) Create stream I0129 11:14:05.700597 8 log.go:172] (0xc0000eb1e0) (0xc0009280a0) Stream added, broadcasting: 1 I0129 11:14:05.704995 8 log.go:172] (0xc0000eb1e0) Reply frame received for 1 I0129 11:14:05.705062 8 log.go:172] (0xc0000eb1e0) (0xc0008e74a0) Create stream I0129 11:14:05.705077 8 log.go:172] (0xc0000eb1e0) (0xc0008e74a0) Stream added, broadcasting: 3 I0129 11:14:05.706917 8 log.go:172] (0xc0000eb1e0) Reply frame received for 3 I0129 11:14:05.706973 8 log.go:172] (0xc0000eb1e0) (0xc001e98be0) Create stream I0129 11:14:05.706983 8 log.go:172] (0xc0000eb1e0) (0xc001e98be0) Stream added, broadcasting: 5 I0129 11:14:05.708526 8 log.go:172] (0xc0000eb1e0) Reply frame received for 5 I0129 11:14:05.818983 8 log.go:172] (0xc0000eb1e0) Data frame received for 3 I0129 11:14:05.819089 8 log.go:172] (0xc0008e74a0) (3) Data frame handling I0129 11:14:05.819121 8 log.go:172] (0xc0008e74a0) (3) Data frame sent I0129 11:14:05.930002 8 log.go:172] (0xc0000eb1e0) (0xc0008e74a0) Stream removed, broadcasting: 3 I0129 11:14:05.930127 8 log.go:172] (0xc0000eb1e0) Data frame received for 1 I0129 11:14:05.930153 8 log.go:172] (0xc0009280a0) (1) Data frame handling I0129 11:14:05.930170 8 log.go:172] (0xc0009280a0) (1) Data frame sent I0129 11:14:05.930183 8 log.go:172] (0xc0000eb1e0) (0xc0009280a0) Stream removed, broadcasting: 1 I0129 11:14:05.930207 8 log.go:172] (0xc0000eb1e0) (0xc001e98be0) Stream removed, broadcasting: 5 I0129 11:14:05.930270 8 log.go:172] (0xc0000eb1e0) Go away received I0129 11:14:05.930446 8 log.go:172] (0xc0000eb1e0) (0xc0009280a0) Stream removed, broadcasting: 1 I0129 11:14:05.930467 8 log.go:172] (0xc0000eb1e0) (0xc0008e74a0) Stream removed, broadcasting: 3 I0129 11:14:05.930476 8 log.go:172] (0xc0000eb1e0) (0xc001e98be0) Stream removed, broadcasting: 5 Jan 29 11:14:05.930: INFO: Exec stderr: "" Jan 29 11:14:05.930: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:05.930: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:05.997235 8 log.go:172] (0xc0000eb6b0) (0xc0009286e0) Create stream I0129 11:14:05.997402 8 log.go:172] (0xc0000eb6b0) (0xc0009286e0) Stream added, broadcasting: 1 I0129 11:14:06.004440 8 log.go:172] (0xc0000eb6b0) Reply frame received for 1 I0129 11:14:06.004500 8 log.go:172] (0xc0000eb6b0) (0xc001e98d20) Create stream I0129 11:14:06.004514 8 log.go:172] (0xc0000eb6b0) (0xc001e98d20) Stream added, broadcasting: 3 I0129 11:14:06.005619 8 log.go:172] (0xc0000eb6b0) Reply frame received for 3 I0129 11:14:06.005657 8 log.go:172] (0xc0000eb6b0) (0xc001db5180) Create stream I0129 11:14:06.005677 8 log.go:172] (0xc0000eb6b0) (0xc001db5180) Stream added, broadcasting: 5 I0129 11:14:06.006693 8 log.go:172] (0xc0000eb6b0) Reply frame received for 5 I0129 11:14:06.155247 8 log.go:172] (0xc0000eb6b0) Data frame received for 3 I0129 11:14:06.155300 8 log.go:172] (0xc001e98d20) (3) Data frame handling I0129 11:14:06.155314 8 log.go:172] (0xc001e98d20) (3) Data frame sent I0129 11:14:06.256667 8 log.go:172] (0xc0000eb6b0) (0xc001e98d20) 
Stream removed, broadcasting: 3 I0129 11:14:06.256838 8 log.go:172] (0xc0000eb6b0) Data frame received for 1 I0129 11:14:06.256868 8 log.go:172] (0xc0009286e0) (1) Data frame handling I0129 11:14:06.256905 8 log.go:172] (0xc0009286e0) (1) Data frame sent I0129 11:14:06.256936 8 log.go:172] (0xc0000eb6b0) (0xc001db5180) Stream removed, broadcasting: 5 I0129 11:14:06.256988 8 log.go:172] (0xc0000eb6b0) (0xc0009286e0) Stream removed, broadcasting: 1 I0129 11:14:06.257006 8 log.go:172] (0xc0000eb6b0) Go away received I0129 11:14:06.257212 8 log.go:172] (0xc0000eb6b0) (0xc0009286e0) Stream removed, broadcasting: 1 I0129 11:14:06.257227 8 log.go:172] (0xc0000eb6b0) (0xc001e98d20) Stream removed, broadcasting: 3 I0129 11:14:06.257237 8 log.go:172] (0xc0000eb6b0) (0xc001db5180) Stream removed, broadcasting: 5 Jan 29 11:14:06.257: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 29 11:14:06.257: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:06.257: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:06.319530 8 log.go:172] (0xc00271c790) (0xc001db5400) Create stream I0129 11:14:06.319631 8 log.go:172] (0xc00271c790) (0xc001db5400) Stream added, broadcasting: 1 I0129 11:14:06.352702 8 log.go:172] (0xc00271c790) Reply frame received for 1 I0129 11:14:06.352764 8 log.go:172] (0xc00271c790) (0xc001d8c000) Create stream I0129 11:14:06.352774 8 log.go:172] (0xc00271c790) (0xc001d8c000) Stream added, broadcasting: 3 I0129 11:14:06.353681 8 log.go:172] (0xc00271c790) Reply frame received for 3 I0129 11:14:06.353711 8 log.go:172] (0xc00271c790) (0xc001b9a000) Create stream I0129 11:14:06.353723 8 log.go:172] (0xc00271c790) (0xc001b9a000) Stream added, broadcasting: 5 I0129 11:14:06.354899 8 log.go:172] (0xc00271c790) Reply frame received for 5 I0129 11:14:06.483084 8 log.go:172] (0xc00271c790) Data frame received for 3 I0129 11:14:06.483182 8 log.go:172] (0xc001d8c000) (3) Data frame handling I0129 11:14:06.483227 8 log.go:172] (0xc001d8c000) (3) Data frame sent I0129 11:14:06.695028 8 log.go:172] (0xc00271c790) Data frame received for 1 I0129 11:14:06.695219 8 log.go:172] (0xc00271c790) (0xc001d8c000) Stream removed, broadcasting: 3 I0129 11:14:06.695345 8 log.go:172] (0xc001db5400) (1) Data frame handling I0129 11:14:06.695387 8 log.go:172] (0xc001db5400) (1) Data frame sent I0129 11:14:06.695468 8 log.go:172] (0xc00271c790) (0xc001b9a000) Stream removed, broadcasting: 5 I0129 11:14:06.695520 8 log.go:172] (0xc00271c790) (0xc001db5400) Stream removed, broadcasting: 1 I0129 11:14:06.695535 8 log.go:172] (0xc00271c790) Go away received I0129 11:14:06.695743 8 log.go:172] (0xc00271c790) (0xc001db5400) Stream removed, broadcasting: 1 I0129 11:14:06.695759 8 log.go:172] (0xc00271c790) (0xc001d8c000) Stream removed, broadcasting: 3 I0129 11:14:06.695771 8 log.go:172] (0xc00271c790) (0xc001b9a000) Stream removed, broadcasting: 5 Jan 29 11:14:06.695: INFO: Exec stderr: "" Jan 29 11:14:06.695: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:06.696: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:06.880046 8 log.go:172] (0xc000ce8630) (0xc001b9a280) Create 
stream I0129 11:14:06.880192 8 log.go:172] (0xc000ce8630) (0xc001b9a280) Stream added, broadcasting: 1 I0129 11:14:06.891906 8 log.go:172] (0xc000ce8630) Reply frame received for 1 I0129 11:14:06.891993 8 log.go:172] (0xc000ce8630) (0xc001b96000) Create stream I0129 11:14:06.892009 8 log.go:172] (0xc000ce8630) (0xc001b96000) Stream added, broadcasting: 3 I0129 11:14:06.895304 8 log.go:172] (0xc000ce8630) Reply frame received for 3 I0129 11:14:06.895346 8 log.go:172] (0xc000ce8630) (0xc001e74000) Create stream I0129 11:14:06.895357 8 log.go:172] (0xc000ce8630) (0xc001e74000) Stream added, broadcasting: 5 I0129 11:14:06.897647 8 log.go:172] (0xc000ce8630) Reply frame received for 5 I0129 11:14:07.040405 8 log.go:172] (0xc000ce8630) Data frame received for 3 I0129 11:14:07.040441 8 log.go:172] (0xc001b96000) (3) Data frame handling I0129 11:14:07.040456 8 log.go:172] (0xc001b96000) (3) Data frame sent I0129 11:14:07.179133 8 log.go:172] (0xc000ce8630) Data frame received for 1 I0129 11:14:07.179334 8 log.go:172] (0xc000ce8630) (0xc001b96000) Stream removed, broadcasting: 3 I0129 11:14:07.179418 8 log.go:172] (0xc001b9a280) (1) Data frame handling I0129 11:14:07.179455 8 log.go:172] (0xc001b9a280) (1) Data frame sent I0129 11:14:07.179519 8 log.go:172] (0xc000ce8630) (0xc001e74000) Stream removed, broadcasting: 5 I0129 11:14:07.179610 8 log.go:172] (0xc000ce8630) (0xc001b9a280) Stream removed, broadcasting: 1 I0129 11:14:07.179757 8 log.go:172] (0xc000ce8630) Go away received I0129 11:14:07.179900 8 log.go:172] (0xc000ce8630) (0xc001b9a280) Stream removed, broadcasting: 1 I0129 11:14:07.179940 8 log.go:172] (0xc000ce8630) (0xc001b96000) Stream removed, broadcasting: 3 I0129 11:14:07.179960 8 log.go:172] (0xc000ce8630) (0xc001e74000) Stream removed, broadcasting: 5 Jan 29 11:14:07.180: INFO: Exec stderr: "" Jan 29 11:14:07.180: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:07.180: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:07.260969 8 log.go:172] (0xc0020e2580) (0xc001d8c320) Create stream I0129 11:14:07.261041 8 log.go:172] (0xc0020e2580) (0xc001d8c320) Stream added, broadcasting: 1 I0129 11:14:07.267667 8 log.go:172] (0xc0020e2580) Reply frame received for 1 I0129 11:14:07.267720 8 log.go:172] (0xc0020e2580) (0xc001e740a0) Create stream I0129 11:14:07.267735 8 log.go:172] (0xc0020e2580) (0xc001e740a0) Stream added, broadcasting: 3 I0129 11:14:07.268549 8 log.go:172] (0xc0020e2580) Reply frame received for 3 I0129 11:14:07.268580 8 log.go:172] (0xc0020e2580) (0xc001e52000) Create stream I0129 11:14:07.268592 8 log.go:172] (0xc0020e2580) (0xc001e52000) Stream added, broadcasting: 5 I0129 11:14:07.269516 8 log.go:172] (0xc0020e2580) Reply frame received for 5 I0129 11:14:07.542807 8 log.go:172] (0xc0020e2580) Data frame received for 3 I0129 11:14:07.542969 8 log.go:172] (0xc001e740a0) (3) Data frame handling I0129 11:14:07.543020 8 log.go:172] (0xc001e740a0) (3) Data frame sent I0129 11:14:07.674223 8 log.go:172] (0xc0020e2580) (0xc001e740a0) Stream removed, broadcasting: 3 I0129 11:14:07.674392 8 log.go:172] (0xc0020e2580) Data frame received for 1 I0129 11:14:07.674457 8 log.go:172] (0xc0020e2580) (0xc001e52000) Stream removed, broadcasting: 5 I0129 11:14:07.674531 8 log.go:172] (0xc001d8c320) (1) Data frame handling I0129 11:14:07.674577 8 log.go:172] (0xc001d8c320) (1) Data frame sent I0129 
11:14:07.674600 8 log.go:172] (0xc0020e2580) (0xc001d8c320) Stream removed, broadcasting: 1 I0129 11:14:07.674627 8 log.go:172] (0xc0020e2580) Go away received I0129 11:14:07.675273 8 log.go:172] (0xc0020e2580) (0xc001d8c320) Stream removed, broadcasting: 1 I0129 11:14:07.675287 8 log.go:172] (0xc0020e2580) (0xc001e740a0) Stream removed, broadcasting: 3 I0129 11:14:07.675300 8 log.go:172] (0xc0020e2580) (0xc001e52000) Stream removed, broadcasting: 5 Jan 29 11:14:07.675: INFO: Exec stderr: "" Jan 29 11:14:07.675: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tfpdq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 29 11:14:07.675: INFO: >>> kubeConfig: /root/.kube/config I0129 11:14:07.751764 8 log.go:172] (0xc0000ead10) (0xc001e52320) Create stream I0129 11:14:07.751964 8 log.go:172] (0xc0000ead10) (0xc001e52320) Stream added, broadcasting: 1 I0129 11:14:07.759744 8 log.go:172] (0xc0000ead10) Reply frame received for 1 I0129 11:14:07.759839 8 log.go:172] (0xc0000ead10) (0xc001d8c3c0) Create stream I0129 11:14:07.759871 8 log.go:172] (0xc0000ead10) (0xc001d8c3c0) Stream added, broadcasting: 3 I0129 11:14:07.761294 8 log.go:172] (0xc0000ead10) Reply frame received for 3 I0129 11:14:07.761330 8 log.go:172] (0xc0000ead10) (0xc001d8c500) Create stream I0129 11:14:07.761351 8 log.go:172] (0xc0000ead10) (0xc001d8c500) Stream added, broadcasting: 5 I0129 11:14:07.762328 8 log.go:172] (0xc0000ead10) Reply frame received for 5 I0129 11:14:07.927417 8 log.go:172] (0xc0000ead10) Data frame received for 3 I0129 11:14:07.927520 8 log.go:172] (0xc001d8c3c0) (3) Data frame handling I0129 11:14:07.927563 8 log.go:172] (0xc001d8c3c0) (3) Data frame sent I0129 11:14:08.056905 8 log.go:172] (0xc0000ead10) Data frame received for 1 I0129 11:14:08.057102 8 log.go:172] (0xc001e52320) (1) Data frame handling I0129 11:14:08.057201 8 log.go:172] (0xc001e52320) (1) Data frame sent I0129 11:14:08.059073 8 log.go:172] (0xc0000ead10) (0xc001e52320) Stream removed, broadcasting: 1 I0129 11:14:08.059270 8 log.go:172] (0xc0000ead10) (0xc001d8c3c0) Stream removed, broadcasting: 3 I0129 11:14:08.059442 8 log.go:172] (0xc0000ead10) (0xc001d8c500) Stream removed, broadcasting: 5 I0129 11:14:08.059484 8 log.go:172] (0xc0000ead10) Go away received I0129 11:14:08.059577 8 log.go:172] (0xc0000ead10) (0xc001e52320) Stream removed, broadcasting: 1 I0129 11:14:08.059599 8 log.go:172] (0xc0000ead10) (0xc001d8c3c0) Stream removed, broadcasting: 3 I0129 11:14:08.059618 8 log.go:172] (0xc0000ead10) (0xc001d8c500) Stream removed, broadcasting: 5 Jan 29 11:14:08.059: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:14:08.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-tfpdq" for this suite. 
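The exec output above is inspected to decide whether each container's /etc/hosts is kubelet-managed. A rough sketch of the two cases the test separates, with illustrative names: a plain container whose /etc/hosts the kubelet rewrites, and one that mounts its own file over /etc/hosts and is therefore left alone (hostNetwork=true pods are also left alone, as the last STEP verifies):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod with two containers: "managed" gets the kubelet-written
	// /etc/hosts, while "unmanaged" mounts its own file over /etc/hosts and is
	// therefore skipped by the kubelet.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "etc-hosts-demo"},
		Spec: corev1.PodSpec{
			HostNetwork: false,
			Containers: []corev1.Container{
				{
					Name:    "managed",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
				},
				{
					Name:    "unmanaged",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-etc-hosts",
						MountPath: "/etc/hosts", // explicit mount disables kubelet management
					}},
				},
			},
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
		},
	}
	fmt.Println("containers:", len(pod.Spec.Containers))
}
```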
Jan 29 11:15:12.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:15:12.168: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-tfpdq, resource: bindings, ignored listing per whitelist Jan 29 11:15:12.305: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-tfpdq deletion completed in 1m4.21223244s • [SLOW TEST:92.683 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:15:12.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-9e396262-4288-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 11:15:12.663: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-rsm5n" to be "success or failure" Jan 29 11:15:12.676: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.672635ms Jan 29 11:15:14.693: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030122093s Jan 29 11:15:16.737: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073854618s Jan 29 11:15:18.755: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092062188s Jan 29 11:15:20.770: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106918856s Jan 29 11:15:22.781: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.117437441s STEP: Saw pod success Jan 29 11:15:22.781: INFO: Pod "pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:15:22.783: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 29 11:15:22.830: INFO: Waiting for pod pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005 to disappear Jan 29 11:15:22.834: INFO: Pod pod-configmaps-9e4354bb-4288-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:15:22.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rsm5n" for this suite. Jan 29 11:15:29.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:15:29.144: INFO: namespace: e2e-tests-configmap-rsm5n, resource: bindings, ignored listing per whitelist Jan 29 11:15:29.183: INFO: namespace e2e-tests-configmap-rsm5n deletion completed in 6.340618044s • [SLOW TEST:16.878 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:15:29.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005 Jan 29 11:15:29.377: INFO: Pod name my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005: Found 0 pods out of 1 Jan 29 11:15:34.748: INFO: Pod name my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005: Found 1 pods out of 1 Jan 29 11:15:34.748: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005" are running Jan 29 11:15:37.284: INFO: Pod "my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005-tl8ct" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 11:15:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 11:15:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 11:15:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: 
[my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 11:15:29 +0000 UTC Reason: Message:}]) Jan 29 11:15:37.284: INFO: Trying to dial the pod Jan 29 11:15:42.323: INFO: Controller my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005: Got expected result from replica 1 [my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005-tl8ct]: "my-hostname-basic-a83849c9-4288-11ea-8d54-0242ac110005-tl8ct", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:15:42.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-hxjln" for this suite. Jan 29 11:15:48.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:15:48.428: INFO: namespace: e2e-tests-replication-controller-hxjln, resource: bindings, ignored listing per whitelist Jan 29 11:15:48.590: INFO: namespace e2e-tests-replication-controller-hxjln deletion completed in 6.260058245s • [SLOW TEST:19.407 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:15:48.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-2mk2j [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Jan 29 11:15:48.995: INFO: Found 0 stateful pods, waiting for 3 Jan 29 11:15:59.005: INFO: Found 1 stateful pods, waiting for 3 Jan 29 11:16:09.026: INFO: Found 2 stateful pods, waiting for 3 Jan 29 11:16:19.019: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:16:19.019: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:16:19.019: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jan 29 11:16:19.067: INFO: Updating stateful set ss2 STEP: 
Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jan 29 11:16:29.160: INFO: Updating stateful set ss2 Jan 29 11:16:29.199: INFO: Waiting for Pod e2e-tests-statefulset-2mk2j/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jan 29 11:16:39.513: INFO: Found 1 stateful pods, waiting for 3 Jan 29 11:16:49.575: INFO: Found 2 stateful pods, waiting for 3 Jan 29 11:16:59.603: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:16:59.603: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:16:59.603: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 29 11:17:09.537: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:17:09.537: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 29 11:17:09.537: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jan 29 11:17:09.593: INFO: Updating stateful set ss2 Jan 29 11:17:09.665: INFO: Waiting for Pod e2e-tests-statefulset-2mk2j/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:17:19.687: INFO: Waiting for Pod e2e-tests-statefulset-2mk2j/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:17:29.808: INFO: Updating stateful set ss2 Jan 29 11:17:29.904: INFO: Waiting for StatefulSet e2e-tests-statefulset-2mk2j/ss2 to complete update Jan 29 11:17:29.904: INFO: Waiting for Pod e2e-tests-statefulset-2mk2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:17:39.931: INFO: Waiting for StatefulSet e2e-tests-statefulset-2mk2j/ss2 to complete update Jan 29 11:17:39.931: INFO: Waiting for Pod e2e-tests-statefulset-2mk2j/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jan 29 11:17:49.925: INFO: Waiting for StatefulSet e2e-tests-statefulset-2mk2j/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 29 11:17:59.935: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2mk2j Jan 29 11:17:59.946: INFO: Scaling statefulset ss2 to 0 Jan 29 11:18:40.020: INFO: Waiting for statefulset status.replicas updated to 0 Jan 29 11:18:40.029: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:18:40.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-2mk2j" for this suite. 
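The canary and phased steps above rely on the RollingUpdate partition: pods with an ordinal at or above the partition move to the new revision, the rest keep the old one until the partition is lowered. A small sketch with the apps/v1 types; the partition value is an illustrative assumption:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// Hypothetical strategy for a 3-replica StatefulSet: with Partition=2 only
	// the pod with ordinal 2 is updated (the canary); dropping the partition to
	// 1 and then 0 rolls the new revision out in phases.
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	fmt.Printf("update strategy: %s, partition: %d\n", strategy.Type, *strategy.RollingUpdate.Partition)
}
```

Setting the partition higher than the replica count, as the "Not applying an update when the partition is greater than the number of replicas" step exercises, leaves every pod on the old revision.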
Jan 29 11:18:48.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:18:48.325: INFO: namespace: e2e-tests-statefulset-2mk2j, resource: bindings, ignored listing per whitelist Jan 29 11:18:48.388: INFO: namespace e2e-tests-statefulset-2mk2j deletion completed in 8.291908118s • [SLOW TEST:179.797 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:18:48.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Jan 29 11:18:48.825: INFO: Waiting up to 5m0s for pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005" in namespace "e2e-tests-containers-n4r4t" to be "success or failure" Jan 29 11:18:48.835: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.431794ms Jan 29 11:18:50.850: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024548899s Jan 29 11:18:52.875: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049829426s Jan 29 11:18:54.895: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069751696s Jan 29 11:18:56.903: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.077718912s Jan 29 11:18:58.917: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 10.091518743s Jan 29 11:19:00.945: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.119932738s STEP: Saw pod success Jan 29 11:19:00.945: INFO: Pod "client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:19:00.954: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005 container test-container: STEP: delete the pod Jan 29 11:19:01.118: INFO: Waiting for pod client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005 to disappear Jan 29 11:19:01.160: INFO: Pod client-containers-1f1c5ecd-4289-11ea-8d54-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:19:01.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-n4r4t" for this suite. Jan 29 11:19:07.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:19:07.339: INFO: namespace: e2e-tests-containers-n4r4t, resource: bindings, ignored listing per whitelist Jan 29 11:19:07.465: INFO: namespace e2e-tests-containers-n4r4t deletion completed in 6.283076544s • [SLOW TEST:19.077 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:19:07.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jan 29 11:19:07.678: INFO: Waiting up to 5m0s for pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-lwkgc" to be "success or failure" Jan 29 11:19:07.708: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.599142ms Jan 29 11:19:10.327: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.648689916s Jan 29 11:19:12.347: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66938096s Jan 29 11:19:14.368: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.689998687s Jan 29 11:19:16.386: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.708093605s Jan 29 11:19:18.411: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.733113486s STEP: Saw pod success Jan 29 11:19:18.411: INFO: Pod "pod-2a5945c3-4289-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:19:18.417: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-2a5945c3-4289-11ea-8d54-0242ac110005 container test-container: STEP: delete the pod Jan 29 11:19:18.632: INFO: Waiting for pod pod-2a5945c3-4289-11ea-8d54-0242ac110005 to disappear Jan 29 11:19:18.639: INFO: Pod pod-2a5945c3-4289-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:19:18.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lwkgc" for this suite. Jan 29 11:19:26.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:19:26.746: INFO: namespace: e2e-tests-emptydir-lwkgc, resource: bindings, ignored listing per whitelist Jan 29 11:19:26.864: INFO: namespace e2e-tests-emptydir-lwkgc deletion completed in 8.218666097s • [SLOW TEST:19.399 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:19:26.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-35e43939-4289-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume secrets Jan 29 11:19:27.092: INFO: Waiting up to 5m0s for pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-xjtfm" to be "success or failure" Jan 29 11:19:27.106: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.873059ms Jan 29 11:19:29.194: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101866633s Jan 29 11:19:31.205: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112641496s Jan 29 11:19:33.220: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128510336s Jan 29 11:19:35.238: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146177266s Jan 29 11:19:37.337: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.244668274s STEP: Saw pod success Jan 29 11:19:37.337: INFO: Pod "pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:19:37.385: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 29 11:19:37.516: INFO: Waiting for pod pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005 to disappear Jan 29 11:19:37.529: INFO: Pod pod-secrets-35eba8f3-4289-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:19:37.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xjtfm" for this suite. Jan 29 11:19:43.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:19:43.745: INFO: namespace: e2e-tests-secrets-xjtfm, resource: bindings, ignored listing per whitelist Jan 29 11:19:43.799: INFO: namespace e2e-tests-secrets-xjtfm deletion completed in 6.262093472s • [SLOW TEST:16.934 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:19:43.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-vtkj STEP: Creating a pod to test atomic-volume-subpath Jan 29 11:19:44.192: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vtkj" in namespace "e2e-tests-subpath-889wt" to be "success or failure" Jan 29 11:19:44.213: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 20.658481ms Jan 29 11:19:46.246: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053661817s Jan 29 11:19:48.268: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076320724s Jan 29 11:19:50.434: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.242373714s Jan 29 11:19:52.515: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.322441637s Jan 29 11:19:54.572: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.380318923s Jan 29 11:19:56.624: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.432308743s Jan 29 11:19:58.791: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 14.598402269s Jan 29 11:20:00.800: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 16.607692859s Jan 29 11:20:02.811: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 18.618988829s Jan 29 11:20:04.836: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 20.644321824s Jan 29 11:20:06.864: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 22.67144177s Jan 29 11:20:08.894: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 24.701803223s Jan 29 11:20:10.990: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 26.798021904s Jan 29 11:20:13.014: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 28.821396123s Jan 29 11:20:15.035: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 30.84247423s Jan 29 11:20:17.053: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Running", Reason="", readiness=false. Elapsed: 32.861087818s Jan 29 11:20:19.283: INFO: Pod "pod-subpath-test-configmap-vtkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.090919366s STEP: Saw pod success Jan 29 11:20:19.283: INFO: Pod "pod-subpath-test-configmap-vtkj" satisfied condition "success or failure" Jan 29 11:20:19.293: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-vtkj container test-container-subpath-configmap-vtkj: STEP: delete the pod Jan 29 11:20:19.582: INFO: Waiting for pod pod-subpath-test-configmap-vtkj to disappear Jan 29 11:20:19.595: INFO: Pod pod-subpath-test-configmap-vtkj no longer exists STEP: Deleting pod pod-subpath-test-configmap-vtkj Jan 29 11:20:19.595: INFO: Deleting pod "pod-subpath-test-configmap-vtkj" in namespace "e2e-tests-subpath-889wt" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:20:19.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-889wt" for this suite. 
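The subpath run above mounts a single ConfigMap key, via subPath, over a file path inside the container. A rough stand-alone equivalent is sketched below; the ConfigMap name, pod name, image and target path are all illustrative, and the suite's own test image and the pre-existing file it mounts over are different.

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm            # illustrative name
data:
  greeting.txt: "hello from a configmap"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /opt/existing/greeting.txt"]
    volumeMounts:
    - name: cm
      mountPath: /opt/existing/greeting.txt   # a single file is mounted, not the whole directory
      subPath: greeting.txt
  volumes:
  - name: cm
    configMap:
      name: subpath-demo-cm
EOF
kubectl logs subpath-demo          # prints the ConfigMap content once the pod has completed
```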
Jan 29 11:20:25.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:20:25.866: INFO: namespace: e2e-tests-subpath-889wt, resource: bindings, ignored listing per whitelist Jan 29 11:20:25.884: INFO: namespace e2e-tests-subpath-889wt deletion completed in 6.274026826s • [SLOW TEST:42.084 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:20:25.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5917c6b3-4289-11ea-8d54-0242ac110005 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5917c6b3-4289-11ea-8d54-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:20:36.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h9s9x" for this suite. 
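The projected-ConfigMap case relies on the kubelet refreshing projected volume contents after the backing ConfigMap changes. A hand-rolled version of the same scenario might look like the following; names and image are made up, and the refresh only happens on the kubelet sync period, so it is not instantaneous.

```sh
kubectl create configmap projected-demo-cm --from-literal=data-1=value-1   # illustrative
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: projected-demo-cm
EOF
# Change the ConfigMap and watch the mounted file follow after a short delay:
kubectl patch configmap projected-demo-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-cm-demo
```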
Jan 29 11:21:00.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:21:00.597: INFO: namespace: e2e-tests-projected-h9s9x, resource: bindings, ignored listing per whitelist Jan 29 11:21:00.733: INFO: namespace e2e-tests-projected-h9s9x deletion completed in 24.378511493s • [SLOW TEST:34.849 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:21:00.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-6defba04-4289-11ea-8d54-0242ac110005 STEP: Creating secret with name s-test-opt-upd-6defbb3b-4289-11ea-8d54-0242ac110005 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6defba04-4289-11ea-8d54-0242ac110005 STEP: Updating secret s-test-opt-upd-6defbb3b-4289-11ea-8d54-0242ac110005 STEP: Creating secret with name s-test-opt-create-6defbb7e-4289-11ea-8d54-0242ac110005 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:21:19.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rtfmv" for this suite. 
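The optional-Secret case deletes one referenced Secret, updates a second and creates a third while the pod keeps running, which only works because the volumes are marked optional. A simplified sketch (two volumes instead of the test's three, names invented):

```sh
kubectl create secret generic s-demo-upd --from-literal=username=alice      # illustrative
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-upd /etc/secret-create; sleep 5; done"]
    volumeMounts:
    - name: upd
      mountPath: /etc/secret-upd
    - name: create
      mountPath: /etc/secret-create
  volumes:
  - name: upd
    secret:
      secretName: s-demo-upd
      optional: true                 # the pod would also start (and keep running) if this Secret were deleted
  - name: create
    secret:
      secretName: s-demo-create      # does not exist yet; optional lets the pod start anyway
      optional: true
EOF
kubectl patch secret s-demo-upd -p '{"stringData":{"username":"bob"}}'      # update shows up under /etc/secret-upd
kubectl create secret generic s-demo-create --from-literal=token=abc123     # files appear under /etc/secret-create
```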
Jan 29 11:21:45.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:21:45.682: INFO: namespace: e2e-tests-secrets-rtfmv, resource: bindings, ignored listing per whitelist Jan 29 11:21:45.684: INFO: namespace e2e-tests-secrets-rtfmv deletion completed in 26.170885025s • [SLOW TEST:44.950 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:21:45.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 29 11:21:56.529: INFO: Successfully updated pod "annotationupdate88a504d7-4289-11ea-8d54-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:21:58.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-frwlt" for this suite. 
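The annotation-update check is a plain Downward API volume projecting metadata.annotations; re-annotating the pod changes the mounted file after the next kubelet sync. Illustrative names and image:

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo build=two --overwrite    # the 'annotations' file is rewritten shortly afterwards
```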
Jan 29 11:22:22.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:22:22.885: INFO: namespace: e2e-tests-downward-api-frwlt, resource: bindings, ignored listing per whitelist Jan 29 11:22:23.026: INFO: namespace e2e-tests-downward-api-frwlt deletion completed in 24.239298874s • [SLOW TEST:37.342 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:22:23.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Jan 29 11:22:23.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:25.435: INFO: stderr: "" Jan 29 11:22:25.436: INFO: stdout: "pod/pause created\n" Jan 29 11:22:25.436: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 29 11:22:25.436: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-lvmqv" to be "running and ready" Jan 29 11:22:25.476: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 40.241793ms Jan 29 11:22:27.493: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057067799s Jan 29 11:22:29.515: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079386337s Jan 29 11:22:31.808: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.372189378s Jan 29 11:22:33.833: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.397284065s Jan 29 11:22:33.833: INFO: Pod "pause" satisfied condition "running and ready" Jan 29 11:22:33.833: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Jan 29 11:22:33.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.125: INFO: stderr: "" Jan 29 11:22:34.125: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jan 29 11:22:34.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.265: INFO: stderr: "" Jan 29 11:22:34.265: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Jan 29 11:22:34.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.386: INFO: stderr: "" Jan 29 11:22:34.386: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jan 29 11:22:34.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.503: INFO: stderr: "" Jan 29 11:22:34.503: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Jan 29 11:22:34.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 29 11:22:34.755: INFO: stdout: "pod \"pause\" force deleted\n" Jan 29 11:22:34.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-lvmqv' Jan 29 11:22:34.931: INFO: stderr: "No resources found.\n" Jan 29 11:22:34.931: INFO: stdout: "" Jan 29 11:22:34.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-lvmqv -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 29 11:22:35.138: INFO: stderr: "" Jan 29 11:22:35.138: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:22:35.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lvmqv" for this suite. 
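Stripped of the harness flags (--kubeconfig and --namespace), the label round trip exercised here comes down to four commands:

```sh
kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # show it as an extra column
kubectl label pods pause testing-label-                      # a trailing '-' removes the label
kubectl get pod pause -L testing-label                       # the TESTING-LABEL column is now empty
```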
Jan 29 11:22:42.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:22:42.136: INFO: namespace: e2e-tests-kubectl-lvmqv, resource: bindings, ignored listing per whitelist Jan 29 11:22:42.205: INFO: namespace e2e-tests-kubectl-lvmqv deletion completed in 6.426640425s • [SLOW TEST:19.179 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:22:42.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:22:50.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ffq62" for this suite. 
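Judging by the cleanup steps, the wrapper-volume check mounts a Secret volume and a ConfigMap volume side by side in one pod and verifies they do not interfere; a comparable manifest (all names invented, not the suite's own) would be:

```sh
kubectl create secret generic wrapper-secret --from-literal=k=v
kubectl create configmap wrapper-configmap --from-literal=k=v
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
EOF
```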
Jan 29 11:22:56.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:22:56.969: INFO: namespace: e2e-tests-emptydir-wrapper-ffq62, resource: bindings, ignored listing per whitelist Jan 29 11:22:56.979: INFO: namespace e2e-tests-emptydir-wrapper-ffq62 deletion completed in 6.164454676s • [SLOW TEST:14.773 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:22:56.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 29 11:23:07.934: INFO: Successfully updated pod "labelsupdateb3256cb2-4289-11ea-8d54-0242ac110005" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:23:10.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q8gkv" for this suite. 
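The labels-update case is the projected-volume counterpart of the annotation test earlier in this block: relabel the pod and the projected labels file follows. An illustrative sketch:

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    tier: one
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labels-demo tier=two --overwrite   # the mounted 'labels' file catches up on the next sync
```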
Jan 29 11:23:34.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:23:34.238: INFO: namespace: e2e-tests-projected-q8gkv, resource: bindings, ignored listing per whitelist Jan 29 11:23:34.413: INFO: namespace e2e-tests-projected-q8gkv deletion completed in 24.257068012s • [SLOW TEST:37.433 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:23:34.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 29 11:23:34.751: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jan 29 11:23:34.815: INFO: Number of nodes with available pods: 0 Jan 29 11:23:34.815: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
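The blue/green label dance that follows can be reproduced outside the suite with a DaemonSet whose pod template carries a node selector; the DaemonSet name, label key/value and image below are illustrative, not the suite's own.

```sh
kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: selector-daemon
spec:
  selector:
    matchLabels:
      app: selector-daemon
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: selector-daemon
    spec:
      nodeSelector:
        color: blue                       # only nodes labelled color=blue run a daemon pod
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1       # any long-running image works
EOF
kubectl label node <node-name> color=blue                 # a daemon pod is scheduled on that node
kubectl label node <node-name> color=green --overwrite    # the selector no longer matches; the pod is removed
```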
Jan 29 11:23:34.896: INFO: Number of nodes with available pods: 0 Jan 29 11:23:34.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:35.924: INFO: Number of nodes with available pods: 0 Jan 29 11:23:35.924: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:36.910: INFO: Number of nodes with available pods: 0 Jan 29 11:23:36.910: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:37.913: INFO: Number of nodes with available pods: 0 Jan 29 11:23:37.913: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:38.907: INFO: Number of nodes with available pods: 0 Jan 29 11:23:38.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:40.103: INFO: Number of nodes with available pods: 0 Jan 29 11:23:40.103: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:40.925: INFO: Number of nodes with available pods: 0 Jan 29 11:23:40.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:41.951: INFO: Number of nodes with available pods: 0 Jan 29 11:23:41.951: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:42.908: INFO: Number of nodes with available pods: 0 Jan 29 11:23:42.908: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:43.950: INFO: Number of nodes with available pods: 0 Jan 29 11:23:43.950: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:44.906: INFO: Number of nodes with available pods: 1 Jan 29 11:23:44.906: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jan 29 11:23:45.047: INFO: Number of nodes with available pods: 1 Jan 29 11:23:45.047: INFO: Number of running nodes: 0, number of available pods: 1 Jan 29 11:23:46.068: INFO: Number of nodes with available pods: 0 Jan 29 11:23:46.068: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jan 29 11:23:46.094: INFO: Number of nodes with available pods: 0 Jan 29 11:23:46.094: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:47.104: INFO: Number of nodes with available pods: 0 Jan 29 11:23:47.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:48.110: INFO: Number of nodes with available pods: 0 Jan 29 11:23:48.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:49.116: INFO: Number of nodes with available pods: 0 Jan 29 11:23:49.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:50.113: INFO: Number of nodes with available pods: 0 Jan 29 11:23:50.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:51.102: INFO: Number of nodes with available pods: 0 Jan 29 11:23:51.102: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:52.112: INFO: Number of nodes with available pods: 0 Jan 29 11:23:52.112: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:53.110: INFO: Number of nodes with available pods: 0 Jan 29 11:23:53.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:54.107: INFO: Number of 
nodes with available pods: 0 Jan 29 11:23:54.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:55.106: INFO: Number of nodes with available pods: 0 Jan 29 11:23:55.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:56.109: INFO: Number of nodes with available pods: 0 Jan 29 11:23:56.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:57.109: INFO: Number of nodes with available pods: 0 Jan 29 11:23:57.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:58.110: INFO: Number of nodes with available pods: 0 Jan 29 11:23:58.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:23:59.108: INFO: Number of nodes with available pods: 0 Jan 29 11:23:59.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:00.114: INFO: Number of nodes with available pods: 0 Jan 29 11:24:00.114: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:01.106: INFO: Number of nodes with available pods: 0 Jan 29 11:24:01.106: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:02.108: INFO: Number of nodes with available pods: 0 Jan 29 11:24:02.108: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:03.212: INFO: Number of nodes with available pods: 0 Jan 29 11:24:03.212: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:04.112: INFO: Number of nodes with available pods: 0 Jan 29 11:24:04.112: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:05.104: INFO: Number of nodes with available pods: 0 Jan 29 11:24:05.104: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:06.116: INFO: Number of nodes with available pods: 0 Jan 29 11:24:06.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:07.107: INFO: Number of nodes with available pods: 0 Jan 29 11:24:07.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:08.912: INFO: Number of nodes with available pods: 0 Jan 29 11:24:08.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:09.147: INFO: Number of nodes with available pods: 0 Jan 29 11:24:09.147: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:10.146: INFO: Number of nodes with available pods: 0 Jan 29 11:24:10.146: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:11.101: INFO: Number of nodes with available pods: 0 Jan 29 11:24:11.101: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:12.110: INFO: Number of nodes with available pods: 0 Jan 29 11:24:12.110: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Jan 29 11:24:13.148: INFO: Number of nodes with available pods: 1 Jan 29 11:24:13.149: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tvzkn, will wait for the garbage collector to delete the pods Jan 29 11:24:13.244: INFO: Deleting DaemonSet.extensions 
daemon-set took: 27.041372ms Jan 29 11:24:13.445: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.832652ms Jan 29 11:24:22.624: INFO: Number of nodes with available pods: 0 Jan 29 11:24:22.624: INFO: Number of running nodes: 0, number of available pods: 0 Jan 29 11:24:22.636: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tvzkn/daemonsets","resourceVersion":"19850922"},"items":null} Jan 29 11:24:22.642: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tvzkn/pods","resourceVersion":"19850922"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:24:22.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tvzkn" for this suite. Jan 29 11:24:30.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:24:30.830: INFO: namespace: e2e-tests-daemonsets-tvzkn, resource: bindings, ignored listing per whitelist Jan 29 11:24:31.110: INFO: namespace e2e-tests-daemonsets-tvzkn deletion completed in 8.406559095s • [SLOW TEST:56.697 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:24:31.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:24:31.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-bkmkn" for this suite. 
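The QOS-class check above only needs the API server to stamp status.qosClass on the pod. As an illustration, equal requests and limits on every container yield Guaranteed (name and image are placeholders):

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi        # requests == limits for every container => Guaranteed
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints Guaranteed
```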
Jan 29 11:24:43.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:24:43.558: INFO: namespace: e2e-tests-pods-bkmkn, resource: bindings, ignored listing per whitelist Jan 29 11:24:43.644: INFO: namespace e2e-tests-pods-bkmkn deletion completed in 12.245388373s • [SLOW TEST:12.533 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:24:43.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 29 11:24:44.080: INFO: Waiting up to 5m0s for pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-clrfw" to be "success or failure" Jan 29 11:24:44.086: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.805532ms Jan 29 11:24:46.113: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033200643s Jan 29 11:24:48.140: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059842153s Jan 29 11:24:50.152: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07214845s Jan 29 11:24:52.349: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269205026s Jan 29 11:24:54.401: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.320520884s STEP: Saw pod success Jan 29 11:24:54.401: INFO: Pod "pod-f2d8a253-4289-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:24:54.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f2d8a253-4289-11ea-8d54-0242ac110005 container test-container: STEP: delete the pod Jan 29 11:24:54.833: INFO: Waiting for pod pod-f2d8a253-4289-11ea-8d54-0242ac110005 to disappear Jan 29 11:24:54.891: INFO: Pod pod-f2d8a253-4289-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:24:54.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-clrfw" for this suite. 
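The (non-root,0777,default) emptyDir case boils down to checking the mode of the mount point and writing a file into it as an unprivileged user; the (root,0644,default) variant that follows differs essentially in the user and the file mode checked. A rough busybox-based approximation (not the suite's own test image):

```sh
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # the "non-root" part of the scenario
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/scratch && echo ok > /mnt/scratch/probe && ls -l /mnt/scratch/probe"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir: {}             # default medium (node disk); medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-demo   # the directory shows up as drwxrwxrwx, so the non-root write succeeds
```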
Jan 29 11:25:01.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:25:01.219: INFO: namespace: e2e-tests-emptydir-clrfw, resource: bindings, ignored listing per whitelist Jan 29 11:25:01.225: INFO: namespace e2e-tests-emptydir-clrfw deletion completed in 6.247031935s • [SLOW TEST:17.581 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:25:01.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 29 11:25:01.446: INFO: Waiting up to 5m0s for pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-p48nl" to be "success or failure" Jan 29 11:25:01.559: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 112.823927ms Jan 29 11:25:03.589: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142635325s Jan 29 11:25:05.629: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182591442s Jan 29 11:25:07.642: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196112529s Jan 29 11:25:09.660: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213404774s Jan 29 11:25:11.671: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224766263s STEP: Saw pod success Jan 29 11:25:11.671: INFO: Pod "pod-fd3576f1-4289-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:25:11.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fd3576f1-4289-11ea-8d54-0242ac110005 container test-container: STEP: delete the pod Jan 29 11:25:12.193: INFO: Waiting for pod pod-fd3576f1-4289-11ea-8d54-0242ac110005 to disappear Jan 29 11:25:12.207: INFO: Pod pod-fd3576f1-4289-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:25:12.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p48nl" for this suite. 
Jan 29 11:25:18.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:25:18.599: INFO: namespace: e2e-tests-emptydir-p48nl, resource: bindings, ignored listing per whitelist Jan 29 11:25:18.599: INFO: namespace e2e-tests-emptydir-p48nl deletion completed in 6.379132618s • [SLOW TEST:17.374 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:25:18.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-6gbx STEP: Creating a pod to test atomic-volume-subpath Jan 29 11:25:18.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6gbx" in namespace "e2e-tests-subpath-dwbdw" to be "success or failure" Jan 29 11:25:18.944: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.43285ms Jan 29 11:25:20.960: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037290873s Jan 29 11:25:24.298: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 5.375393186s Jan 29 11:25:26.337: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 7.414920509s Jan 29 11:25:28.373: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 9.450733426s Jan 29 11:25:30.385: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 11.462200928s Jan 29 11:25:32.408: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.485292862s Jan 29 11:25:34.417: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 15.494736214s Jan 29 11:25:36.435: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 17.512930682s Jan 29 11:25:38.458: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 19.535159414s Jan 29 11:25:40.496: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 21.573897547s Jan 29 11:25:42.516: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 23.593266358s Jan 29 11:25:44.608: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. 
Elapsed: 25.685418755s Jan 29 11:25:46.642: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 27.719798192s Jan 29 11:25:48.654: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 29.731984426s Jan 29 11:25:50.691: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Running", Reason="", readiness=false. Elapsed: 31.76863148s Jan 29 11:25:52.712: INFO: Pod "pod-subpath-test-projected-6gbx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.789712738s STEP: Saw pod success Jan 29 11:25:52.712: INFO: Pod "pod-subpath-test-projected-6gbx" satisfied condition "success or failure" Jan 29 11:25:52.721: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-6gbx container test-container-subpath-projected-6gbx: STEP: delete the pod Jan 29 11:25:52.838: INFO: Waiting for pod pod-subpath-test-projected-6gbx to disappear Jan 29 11:25:52.885: INFO: Pod pod-subpath-test-projected-6gbx no longer exists STEP: Deleting pod pod-subpath-test-projected-6gbx Jan 29 11:25:52.885: INFO: Deleting pod "pod-subpath-test-projected-6gbx" in namespace "e2e-tests-subpath-dwbdw" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:25:52.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-dwbdw" for this suite. Jan 29 11:26:00.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:26:00.958: INFO: namespace: e2e-tests-subpath-dwbdw, resource: bindings, ignored listing per whitelist Jan 29 11:26:01.061: INFO: namespace e2e-tests-subpath-dwbdw deletion completed in 8.161983896s • [SLOW TEST:42.462 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:26:01.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-rnj9b/configmap-test-20e07516-428a-11ea-8d54-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 29 11:26:01.292: INFO: Waiting up to 5m0s for pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-rnj9b" to be "success or failure" Jan 29 11:26:01.298: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.056422ms Jan 29 11:26:03.412: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120580333s Jan 29 11:26:05.418: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126132033s Jan 29 11:26:08.056: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.764106121s Jan 29 11:26:10.180: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.887854101s Jan 29 11:26:12.194: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.902610421s STEP: Saw pod success Jan 29 11:26:12.194: INFO: Pod "pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure" Jan 29 11:26:12.199: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005 container env-test: STEP: delete the pod Jan 29 11:26:12.265: INFO: Waiting for pod pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005 to disappear Jan 29 11:26:12.281: INFO: Pod pod-configmaps-20e2175b-428a-11ea-8d54-0242ac110005 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 29 11:26:12.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rnj9b" for this suite. Jan 29 11:26:18.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 29 11:26:18.710: INFO: namespace: e2e-tests-configmap-rnj9b, resource: bindings, ignored listing per whitelist Jan 29 11:26:18.784: INFO: namespace e2e-tests-configmap-rnj9b deletion completed in 6.494139469s • [SLOW TEST:17.721 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 29 11:26:18.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jan 29 11:26:19.018: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rpjp8,SelfLink:/api/v1/namespaces/e2e-tests-watch-rpjp8/configmaps/e2e-watch-test-resource-version,UID:2b6bcd74-428a-11ea-a994-fa163e34d433,ResourceVersion:19851216,Generation:0,CreationTimestamp:2020-01-29 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 11:26:19.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-rpjp8,SelfLink:/api/v1/namespaces/e2e-tests-watch-rpjp8/configmaps/e2e-watch-test-resource-version,UID:2b6bcd74-428a-11ea-a994-fa163e34d433,ResourceVersion:19851218,Generation:0,CreationTimestamp:2020-01-29 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:26:19.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-rpjp8" for this suite.
Jan 29 11:26:25.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:26:25.092: INFO: namespace: e2e-tests-watch-rpjp8, resource: bindings, ignored listing per whitelist
Jan 29 11:26:25.305: INFO: namespace e2e-tests-watch-rpjp8 deletion completed in 6.277334918s

• [SLOW TEST:6.521 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:26:25.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:26:25.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-w5hm4" to be "success or failure"
Jan 29 11:26:25.596: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.853347ms
Jan 29 11:26:27.611: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026639953s
Jan 29 11:26:29.623: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038834384s
Jan 29 11:26:31.742: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157521038s
Jan 29 11:26:33.803: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218313728s
Jan 29 11:26:35.816: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.231788514s
STEP: Saw pod success
Jan 29 11:26:35.816: INFO: Pod "downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:26:35.823: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:26:35.901: INFO: Waiting for pod downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:26:37.003: INFO: Pod downwardapi-volume-2f5a4ad2-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:26:37.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w5hm4" for this suite.
Jan 29 11:26:45.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:26:45.361: INFO: namespace: e2e-tests-projected-w5hm4, resource: bindings, ignored listing per whitelist
Jan 29 11:26:45.427: INFO: namespace e2e-tests-projected-w5hm4 deletion completed in 8.403608s

• [SLOW TEST:20.122 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:26:45.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:27:45.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-q5tmv" for this suite.
Jan 29 11:28:09.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:28:09.772: INFO: namespace: e2e-tests-container-probe-q5tmv, resource: bindings, ignored listing per whitelist
Jan 29 11:28:09.982: INFO: namespace e2e-tests-container-probe-q5tmv deletion completed in 24.30601637s

• [SLOW TEST:84.554 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:28:09.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 11:28:10.181: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 60.348613ms)
Jan 29 11:28:10.208: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 26.765829ms)
Jan 29 11:28:10.223: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.381532ms)
Jan 29 11:28:10.232: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.093022ms)
Jan 29 11:28:10.243: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.515775ms)
Jan 29 11:28:10.252: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.902374ms)
Jan 29 11:28:10.260: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.252692ms)
Jan 29 11:28:10.268: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.609842ms)
Jan 29 11:28:10.282: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.520717ms)
Jan 29 11:28:10.293: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.313011ms)
Jan 29 11:28:10.352: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 59.597628ms)
Jan 29 11:28:10.371: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.944455ms)
Jan 29 11:28:10.416: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 44.935592ms)
Jan 29 11:28:10.453: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 36.925696ms)
Jan 29 11:28:10.472: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.541287ms)
Jan 29 11:28:10.494: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.334068ms)
Jan 29 11:28:10.512: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.272455ms)
Jan 29 11:28:10.536: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 23.662478ms)
Jan 29 11:28:10.590: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 53.944894ms)
Jan 29 11:28:10.626: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 36.089487ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:28:10.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-lw49l" for this suite.
Jan 29 11:28:16.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:28:16.766: INFO: namespace: e2e-tests-proxy-lw49l, resource: bindings, ignored listing per whitelist
Jan 29 11:28:16.943: INFO: namespace e2e-tests-proxy-lw49l deletion completed in 6.290592335s

• [SLOW TEST:6.961 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
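Note: the twenty timed requests above can be reproduced outside the suite with a raw GET against the node proxy subresource. A minimal client-go sketch, not the framework's own code, assuming a recent client-go with context-aware request methods; the kubeconfig path and node name are taken from this run, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig used throughout this run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Same path the test hits: the kubelet log listing served through the
	// API server's node proxy subresource.
	raw, err := clientset.CoreV1().RESTClient().
		Get().
		AbsPath("/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}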
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:28:16.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:28:17.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-hpjff" to be "success or failure"
Jan 29 11:28:17.158: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 72.808793ms
Jan 29 11:28:19.602: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.516219071s
Jan 29 11:28:21.620: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.534189951s
Jan 29 11:28:23.656: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570697527s
Jan 29 11:28:26.094: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.008181338s
Jan 29 11:28:28.115: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.029351287s
STEP: Saw pod success
Jan 29 11:28:28.115: INFO: Pod "downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:28:28.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:28:28.640: INFO: Waiting for pod downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:28:28.738: INFO: Pod downwardapi-volume-71d1b6ba-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:28:28.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hpjff" for this suite.
Jan 29 11:28:34.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:28:34.909: INFO: namespace: e2e-tests-projected-hpjff, resource: bindings, ignored listing per whitelist
Jan 29 11:28:34.961: INFO: namespace e2e-tests-projected-hpjff deletion completed in 6.214794204s

• [SLOW TEST:18.017 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
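Note: the pod this test family creates is, in outline, a projected downwardAPI volume with an explicit defaultMode. A hedged sketch using the k8s.io/api types; the pod name, image, command, and the 0400 mode are illustrative placeholders rather than the framework's exact values:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// An explicit, restrictive defaultMode (0400 here) that should be reflected
// on every file projected into the volume.
var defaultMode int32 = 0400

var downwardAPIDefaultModePod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "podinfo",
				MountPath: "/etc/podinfo",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					DefaultMode: &defaultMode,
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "podname",
								FieldRef: &corev1.ObjectFieldSelector{
									FieldPath: "metadata.name",
								},
							}},
						},
					}},
				},
			},
		}},
	},
}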
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:28:34.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 29 11:28:35.246: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 29 11:28:40.269: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:28:40.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-c5r5x" for this suite.
Jan 29 11:28:46.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:28:46.841: INFO: namespace: e2e-tests-replication-controller-c5r5x, resource: bindings, ignored listing per whitelist
Jan 29 11:28:47.081: INFO: namespace e2e-tests-replication-controller-c5r5x deletion completed in 6.450862228s

• [SLOW TEST:12.119 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
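Note: the "released" behaviour above comes from relabeling a pod so it stops matching the ReplicationController's selector, at which point the controller no longer owns it. A sketch of such a relabel with client-go (recent signatures assumed; the namespace, pod name, and new label value are placeholders):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePodFromRC changes the pod's "name" label so it no longer matches the
// ReplicationController selector; the RC then releases the pod.
func releasePodFromRC(clientset kubernetes.Interface, namespace, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	_, err := clientset.CoreV1().Pods(namespace).Patch(
		context.TODO(), podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}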
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:28:47.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-84d903fd-428a-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:28:49.185: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-q6nrs" to be "success or failure"
Jan 29 11:28:49.232: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 46.424085ms
Jan 29 11:28:51.453: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267359145s
Jan 29 11:28:53.471: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285757084s
Jan 29 11:28:55.480: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295179206s
Jan 29 11:28:57.594: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.408991303s
Jan 29 11:28:59.618: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.432740249s
Jan 29 11:29:01.633: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.448103168s
STEP: Saw pod success
Jan 29 11:29:01.634: INFO: Pod "pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:29:01.650: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 11:29:01.826: INFO: Waiting for pod pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:29:01.841: INFO: Pod pod-projected-configmaps-84efedbc-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:29:01.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q6nrs" for this suite.
Jan 29 11:29:07.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:29:07.968: INFO: namespace: e2e-tests-projected-q6nrs, resource: bindings, ignored listing per whitelist
Jan 29 11:29:08.059: INFO: namespace e2e-tests-projected-q6nrs deletion completed in 6.206719044s

• [SLOW TEST:20.978 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
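Note: the pod behind this test mounts a ConfigMap through a projected volume, remaps a key to a different path, and runs as a non-root user. A sketch with the k8s.io/api types (the ConfigMap name, key, UID, image, and command are placeholders):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Run the pod as an arbitrary non-root UID.
var nonRootUser int64 = 1000

var projectedConfigMapPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		SecurityContext: &corev1.PodSecurityContext{
			RunAsUser: &nonRootUser,
		},
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-configmap-volume",
				MountPath: "/etc/projected-configmap-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "projected-configmap-volume",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "projected-configmap-test-volume-map",
							},
							// Map the key "data-1" to a different path inside the volume.
							Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data"}},
						},
					}},
				},
			},
		}},
	},
}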
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:29:08.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-90814bcd-428a-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:29:08.777: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-vndpt" to be "success or failure"
Jan 29 11:29:09.025: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 247.904276ms
Jan 29 11:29:11.045: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268145669s
Jan 29 11:29:13.064: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28705085s
Jan 29 11:29:15.307: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.529699119s
Jan 29 11:29:17.333: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555511514s
Jan 29 11:29:19.347: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569947251s
STEP: Saw pod success
Jan 29 11:29:19.347: INFO: Pod "pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:29:19.358: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 11:29:19.491: INFO: Waiting for pod pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:29:19.497: INFO: Pod pod-projected-secrets-90865d64-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:29:19.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vndpt" for this suite.
Jan 29 11:29:25.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:29:25.714: INFO: namespace: e2e-tests-projected-vndpt, resource: bindings, ignored listing per whitelist
Jan 29 11:29:25.890: INFO: namespace e2e-tests-projected-vndpt deletion completed in 6.385220206s

• [SLOW TEST:17.830 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:29:25.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0129 11:29:36.883421       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 11:29:36.883: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:29:36.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-djlzq" for this suite.
Jan 29 11:29:43.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:29:43.146: INFO: namespace: e2e-tests-gc-djlzq, resource: bindings, ignored listing per whitelist
Jan 29 11:29:43.205: INFO: namespace e2e-tests-gc-djlzq deletion completed in 6.211871936s

• [SLOW TEST:17.314 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
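Note: "not orphaning" corresponds to deleting the ReplicationController with a non-orphaning propagation policy, so the garbage collector removes the pods it owns. A hedged client-go sketch (recent client-go signatures assumed; older releases take a pointer to DeleteOptions and no context; namespace and name are placeholders):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndDependents removes a ReplicationController and lets the garbage
// collector delete the pods it owns rather than orphaning them.
func deleteRCAndDependents(clientset kubernetes.Interface, namespace, name string) error {
	// Background deletes asynchronously; Foreground blocks the RC's deletion
	// until its dependents are gone.
	policy := metav1.DeletePropagationBackground
	return clientset.CoreV1().ReplicationControllers(namespace).Delete(
		context.TODO(),
		name,
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
}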
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:29:43.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 29 11:29:43.578: INFO: Waiting up to 5m0s for pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-kklm5" to be "success or failure"
Jan 29 11:29:43.587: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370809ms
Jan 29 11:29:45.607: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028725794s
Jan 29 11:29:47.664: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08586408s
Jan 29 11:29:49.879: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.300685398s
Jan 29 11:29:51.892: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313457479s
Jan 29 11:29:53.911: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.332987666s
STEP: Saw pod success
Jan 29 11:29:53.912: INFO: Pod "pod-a55feb0f-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:29:53.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a55feb0f-428a-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:29:54.823: INFO: Waiting for pod pod-a55feb0f-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:29:54.908: INFO: Pod pod-a55feb0f-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:29:54.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kklm5" for this suite.
Jan 29 11:30:00.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:30:01.170: INFO: namespace: e2e-tests-emptydir-kklm5, resource: bindings, ignored listing per whitelist
Jan 29 11:30:01.196: INFO: namespace e2e-tests-emptydir-kklm5 deletion completed in 6.279990432s

• [SLOW TEST:17.991 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:30:01.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-xkqj5/secret-test-b00a67d0-428a-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:30:01.500: INFO: Waiting up to 5m0s for pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-xkqj5" to be "success or failure"
Jan 29 11:30:01.508: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424238ms
Jan 29 11:30:03.525: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02468258s
Jan 29 11:30:05.541: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040788224s
Jan 29 11:30:07.576: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075563581s
Jan 29 11:30:09.585: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084929087s
Jan 29 11:30:11.595: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094651538s
STEP: Saw pod success
Jan 29 11:30:11.595: INFO: Pod "pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:30:11.598: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005 container env-test: 
STEP: delete the pod
Jan 29 11:30:12.289: INFO: Waiting for pod pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:30:12.308: INFO: Pod pod-configmaps-b00b91b8-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:30:12.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xkqj5" for this suite.
Jan 29 11:30:18.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:30:18.673: INFO: namespace: e2e-tests-secrets-xkqj5, resource: bindings, ignored listing per whitelist
Jan 29 11:30:18.724: INFO: namespace e2e-tests-secrets-xkqj5 deletion completed in 6.141481835s

• [SLOW TEST:17.527 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
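Note: the pod behind this test injects a Secret key into the container environment. A sketch with the k8s.io/api types (secret name, key, image, and command are placeholders):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var secretEnvPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "env-test",
			Image:   "busybox",
			Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
			Env: []corev1.EnvVar{{
				Name: "SECRET_DATA",
				ValueFrom: &corev1.EnvVarSource{
					// Pull the value of key "data-1" from the named Secret.
					SecretKeyRef: &corev1.SecretKeySelector{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
						Key:                  "data-1",
					},
				},
			}},
		}},
	},
}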
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:30:18.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-pjdg8
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-pjdg8
STEP: Deleting pre-stop pod
Jan 29 11:30:44.211: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:30:44.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-pjdg8" for this suite.
Jan 29 11:31:24.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:31:24.976: INFO: namespace: e2e-tests-prestop-pjdg8, resource: bindings, ignored listing per whitelist
Jan 29 11:31:24.995: INFO: namespace e2e-tests-prestop-pjdg8 deletion completed in 40.746298072s

• [SLOW TEST:66.270 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
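Note: the tester pod above carries a preStop hook that notifies the server pod while the tester is being killed, which is how "prestop": 1 ends up in the server's report. A rough sketch of such a hook, assuming a current k8s.io/api where the hook type is named LifecycleHandler (older releases, including the v1.13 API behind this log, call it Handler); the hook command, URL, and image are invented for illustration:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var preStopPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "tester"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "tester",
			Image: "busybox",
			Lifecycle: &corev1.Lifecycle{
				// Runs inside the container after the kill is requested and before
				// the container is terminated.
				PreStop: &corev1.LifecycleHandler{
					Exec: &corev1.ExecAction{
						Command: []string{"wget", "-qO-", "http://server:8080/write?val=prestop"},
					},
				},
			},
		}},
	},
}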
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:31:24.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 29 11:31:25.195: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:31:40.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gnljt" for this suite.
Jan 29 11:31:46.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:31:46.765: INFO: namespace: e2e-tests-init-container-gnljt, resource: bindings, ignored listing per whitelist
Jan 29 11:31:46.800: INFO: namespace e2e-tests-init-container-gnljt deletion completed in 6.21364049s

• [SLOW TEST:21.805 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
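Note: the pod created at 11:31:25 has the general shape below: init containers that must each run to completion, in order, before the main container starts, and with RestartPolicy=Never each runs exactly once. Names and images are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var initContainerPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		// Init containers run sequentially; the main container starts only after
		// both have exited successfully.
		InitContainers: []corev1.Container{
			{Name: "init-1", Image: "busybox", Command: []string{"true"}},
			{Name: "init-2", Image: "busybox", Command: []string{"true"}},
		},
		Containers: []corev1.Container{
			{Name: "run-1", Image: "busybox", Command: []string{"true"}},
		},
	},
}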
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:31:46.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 29 11:31:46.978: INFO: Waiting up to 5m0s for pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-snjdj" to be "success or failure"
Jan 29 11:31:46.989: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.032366ms
Jan 29 11:31:49.001: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022966335s
Jan 29 11:31:51.020: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042131797s
Jan 29 11:31:53.263: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.285426427s
Jan 29 11:31:55.275: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.297193504s
Jan 29 11:31:57.303: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324573531s
STEP: Saw pod success
Jan 29 11:31:57.303: INFO: Pod "pod-eeed8c90-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:31:57.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-eeed8c90-428a-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:31:57.984: INFO: Waiting for pod pod-eeed8c90-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:31:57.993: INFO: Pod pod-eeed8c90-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:31:57.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-snjdj" for this suite.
Jan 29 11:32:04.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:32:04.166: INFO: namespace: e2e-tests-emptydir-snjdj, resource: bindings, ignored listing per whitelist
Jan 29 11:32:04.186: INFO: namespace e2e-tests-emptydir-snjdj deletion completed in 6.184763106s

• [SLOW TEST:17.385 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:32:04.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-f96adc53-428a-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:32:04.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-vvhx7" to be "success or failure"
Jan 29 11:32:04.712: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 48.800685ms
Jan 29 11:32:07.067: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.403689052s
Jan 29 11:32:09.109: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445421076s
Jan 29 11:32:11.126: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462242493s
Jan 29 11:32:13.142: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.47833704s
Jan 29 11:32:15.154: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.490371802s
STEP: Saw pod success
Jan 29 11:32:15.154: INFO: Pod "pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:32:15.158: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 29 11:32:15.978: INFO: Waiting for pod pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005 to disappear
Jan 29 11:32:16.247: INFO: Pod pod-configmaps-f96ee097-428a-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:32:16.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vvhx7" for this suite.
Jan 29 11:32:22.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:32:22.459: INFO: namespace: e2e-tests-configmap-vvhx7, resource: bindings, ignored listing per whitelist
Jan 29 11:32:22.550: INFO: namespace e2e-tests-configmap-vvhx7 deletion completed in 6.28924576s

• [SLOW TEST:18.364 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:32:22.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:33:18.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-ws8x4" for this suite.
Jan 29 11:33:26.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:33:26.662: INFO: namespace: e2e-tests-container-runtime-ws8x4, resource: bindings, ignored listing per whitelist
Jan 29 11:33:26.667: INFO: namespace e2e-tests-container-runtime-ws8x4 deletion completed in 8.289506003s

• [SLOW TEST:64.116 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
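Note: the 'RestartCount', 'Phase', 'Ready', and 'State' checks above read back pod status fields. A sketch of reading the same fields with client-go (recent signatures assumed; clientset wiring, namespace, and name are placeholders):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printContainerStatus prints the status fields the blackbox test asserts on.
func printContainerStatus(clientset kubernetes.Interface, namespace, name string) error {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("phase:", pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: restarts=%d ready=%v\n", cs.Name, cs.RestartCount, cs.Ready)
		if cs.State.Terminated != nil {
			fmt.Printf("  terminated: exitCode=%d reason=%s\n",
				cs.State.Terminated.ExitCode, cs.State.Terminated.Reason)
		}
	}
	return nil
}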
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:33:26.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 29 11:33:27.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852196,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 11:33:27.101: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852197,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 29 11:33:27.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852198,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 29 11:33:37.233: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852211,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 11:33:37.234: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852212,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 29 11:33:37.234: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-4p9px,SelfLink:/api/v1/namespaces/e2e-tests-watch-4p9px/configmaps/e2e-watch-test-label-changed,UID:2a833ca0-428b-11ea-a994-fa163e34d433,ResourceVersion:19852214,Generation:0,CreationTimestamp:2020-01-29 11:33:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:33:37.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4p9px" for this suite.
Jan 29 11:33:43.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:33:43.721: INFO: namespace: e2e-tests-watch-4p9px, resource: bindings, ignored listing per whitelist
Jan 29 11:33:43.730: INFO: namespace e2e-tests-watch-4p9px deletion completed in 6.479232817s

• [SLOW TEST:17.062 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
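Note: the watch above is filtered by label, which is why relabeling the ConfigMap surfaces as a DELETED event and restoring the label surfaces as ADDED even though the object existed throughout. A sketch of opening such a watch with client-go (recent, context-aware signatures assumed; the namespace is a placeholder, the selector is the one from this test):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps streams events for ConfigMaps matching the label
// selector; objects entering or leaving the selector appear as ADDED/DELETED.
func watchLabeledConfigMaps(clientset kubernetes.Interface, namespace string) error {
	w, err := clientset.CoreV1().ConfigMaps(namespace).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
	return nil
}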
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:33:43.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 29 11:33:44.313: INFO: Waiting up to 5m0s for pod "pod-34d262dc-428b-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-chtw8" to be "success or failure"
Jan 29 11:33:44.361: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 47.410262ms
Jan 29 11:33:46.375: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061926781s
Jan 29 11:33:48.386: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07288845s
Jan 29 11:33:50.523: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210000731s
Jan 29 11:33:52.655: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.341435881s
Jan 29 11:33:54.720: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406878138s
STEP: Saw pod success
Jan 29 11:33:54.721: INFO: Pod "pod-34d262dc-428b-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:33:54.738: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-34d262dc-428b-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:33:54.909: INFO: Waiting for pod pod-34d262dc-428b-11ea-8d54-0242ac110005 to disappear
Jan 29 11:33:54.923: INFO: Pod pod-34d262dc-428b-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:33:54.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-chtw8" for this suite.
Jan 29 11:34:01.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:34:01.109: INFO: namespace: e2e-tests-emptydir-chtw8, resource: bindings, ignored listing per whitelist
Jan 29 11:34:01.194: INFO: namespace e2e-tests-emptydir-chtw8 deletion completed in 6.260708585s

• [SLOW TEST:17.464 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:34:01.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 29 11:34:01.499: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 29 11:34:01.550: INFO: Waiting for terminating namespaces to be deleted...
Jan 29 11:34:01.557: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Jan 29 11:34:01.574: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:34:01.575: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:34:01.575: INFO: 	Container coredns ready: true, restart count 0
Jan 29 11:34:01.575: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 29 11:34:01.575: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 29 11:34:01.575: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:34:01.575: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 29 11:34:01.575: INFO: 	Container weave ready: true, restart count 0
Jan 29 11:34:01.575: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 11:34:01.575: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:34:01.575: INFO: 	Container coredns ready: true, restart count 0
Jan 29 11:34:01.575: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:34:01.575: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ee58550feb5656], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:34:02.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-bdvrm" for this suite.
Jan 29 11:34:08.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:34:08.905: INFO: namespace: e2e-tests-sched-pred-bdvrm, resource: bindings, ignored listing per whitelist
Jan 29 11:34:08.925: INFO: namespace e2e-tests-sched-pred-bdvrm deletion completed in 6.237274578s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.731 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
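
The FailedScheduling event above ("1 node(s) didn't match node selector") comes from a pod whose nodeSelector names a label that no node carries. A sketch of such a pod is below; the label key and value are placeholders.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod requests a node label no node in the cluster has, so the
// scheduler emits FailedScheduling and the pod stays Pending.
func unschedulablePod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"example.com/nonexistent": "42"}, // hypothetical label
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // minimal container image (illustrative)
            }},
        },
    }
}
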
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:34:08.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jan 29 11:34:09.209: INFO: Waiting up to 5m0s for pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005" in namespace "e2e-tests-containers-4kgbk" to be "success or failure"
Jan 29 11:34:09.237: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.947349ms
Jan 29 11:34:11.627: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418543052s
Jan 29 11:34:13.650: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440947134s
Jan 29 11:34:15.676: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467568473s
Jan 29 11:34:17.684: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.475578804s
Jan 29 11:34:19.704: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495773167s
STEP: Saw pod success
Jan 29 11:34:19.705: INFO: Pod "client-containers-43b421a2-428b-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:34:19.717: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-43b421a2-428b-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:34:20.208: INFO: Waiting for pod client-containers-43b421a2-428b-11ea-8d54-0242ac110005 to disappear
Jan 29 11:34:20.224: INFO: Pod client-containers-43b421a2-428b-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:34:20.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4kgbk" for this suite.
Jan 29 11:34:26.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:34:26.397: INFO: namespace: e2e-tests-containers-4kgbk, resource: bindings, ignored listing per whitelist
Jan 29 11:34:26.491: INFO: namespace e2e-tests-containers-4kgbk deletion completed in 6.257084526s

• [SLOW TEST:17.565 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
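
In the container API, Command overrides the image's ENTRYPOINT and Args overrides its CMD, which is what the "override the image's default command" case above exercises. A small sketch with assumed image and command values:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideEntrypointPod replaces the image's default ENTRYPOINT and CMD.
func overrideEntrypointPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",                        // illustrative image
                Command: []string{"echo"},                  // overrides ENTRYPOINT
                Args:    []string{"overridden", "command"}, // overrides CMD
            }},
        },
    }
}
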
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:34:26.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 29 11:34:37.085: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:35:03.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-sfm8c" for this suite.
Jan 29 11:35:09.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:35:09.837: INFO: namespace: e2e-tests-namespaces-sfm8c, resource: bindings, ignored listing per whitelist
Jan 29 11:35:09.888: INFO: namespace e2e-tests-namespaces-sfm8c deletion completed in 6.218655438s
STEP: Destroying namespace "e2e-tests-nsdeletetest-h2bsm" for this suite.
Jan 29 11:35:09.892: INFO: Namespace e2e-tests-nsdeletetest-h2bsm was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-mkdf4" for this suite.
Jan 29 11:35:15.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:35:16.005: INFO: namespace: e2e-tests-nsdeletetest-mkdf4, resource: bindings, ignored listing per whitelist
Jan 29 11:35:16.151: INFO: namespace e2e-tests-nsdeletetest-mkdf4 deletion completed in 6.259147835s

• [SLOW TEST:49.659 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
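
Deleting a namespace, as verified above, causes the namespace controller to remove every pod (and other namespaced object) in it before the namespace itself disappears. A sketch using client-go; the context-taking Delete signature assumes a reasonably recent client-go (roughly v0.18+), and clientset construction is left to the caller.

package sketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteNamespace requests namespace deletion; all pods inside are garbage
// collected before the Namespace object is finally removed.
func deleteNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
    return cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{})
}
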
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:35:16.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-zxps
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 11:35:16.437: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zxps" in namespace "e2e-tests-subpath-rrqf8" to be "success or failure"
Jan 29 11:35:16.476: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 39.299534ms
Jan 29 11:35:18.699: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262115305s
Jan 29 11:35:20.721: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284050464s
Jan 29 11:35:22.736: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298733846s
Jan 29 11:35:24.762: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324437198s
Jan 29 11:35:26.773: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 10.336355579s
Jan 29 11:35:28.827: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 12.389461793s
Jan 29 11:35:30.842: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Pending", Reason="", readiness=false. Elapsed: 14.404858857s
Jan 29 11:35:32.881: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 16.444065175s
Jan 29 11:35:34.902: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 18.465050026s
Jan 29 11:35:36.920: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 20.483085934s
Jan 29 11:35:38.940: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 22.503274397s
Jan 29 11:35:40.956: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 24.518959535s
Jan 29 11:35:42.973: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 26.535696967s
Jan 29 11:35:44.991: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 28.553487197s
Jan 29 11:35:47.009: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 30.572153705s
Jan 29 11:35:49.876: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Running", Reason="", readiness=false. Elapsed: 33.438556194s
Jan 29 11:35:52.244: INFO: Pod "pod-subpath-test-downwardapi-zxps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.807411208s
STEP: Saw pod success
Jan 29 11:35:52.245: INFO: Pod "pod-subpath-test-downwardapi-zxps" satisfied condition "success or failure"
Jan 29 11:35:52.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-zxps container test-container-subpath-downwardapi-zxps: 
STEP: delete the pod
Jan 29 11:35:52.883: INFO: Waiting for pod pod-subpath-test-downwardapi-zxps to disappear
Jan 29 11:35:52.888: INFO: Pod pod-subpath-test-downwardapi-zxps no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zxps
Jan 29 11:35:52.888: INFO: Deleting pod "pod-subpath-test-downwardapi-zxps" in namespace "e2e-tests-subpath-rrqf8"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:35:52.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rrqf8" for this suite.
Jan 29 11:36:00.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:36:00.975: INFO: namespace: e2e-tests-subpath-rrqf8, resource: bindings, ignored listing per whitelist
Jan 29 11:36:01.127: INFO: namespace e2e-tests-subpath-rrqf8 deletion completed in 8.229690553s

• [SLOW TEST:44.975 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
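
The "subpaths with downward pod" case mounts a single file out of a downward API volume by using subPath on the volume mount. A sketch of that wiring; file names, mount path, and image are illustrative.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardSubpathPod exposes the pod's name through a downward API volume and
// mounts only that one file into the container via subPath.
func downwardSubpathPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "downward",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox", // illustrative image
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "downward",
                    MountPath: "/etc/podname",
                    SubPath:   "podname", // mount just this file from the volume
                }},
            }},
        },
    }
}
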
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:36:01.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 29 11:36:01.326: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 29 11:36:01.351: INFO: Waiting for terminating namespaces to be deleted...
Jan 29 11:36:01.354: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 29 11:36:01.367: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:01.367: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:01.367: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:01.367: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:01.367: INFO: 	Container coredns ready: true, restart count 0
Jan 29 11:36:01.367: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:01.367: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 29 11:36:01.367: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:01.367: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 29 11:36:01.367: INFO: 	Container weave ready: true, restart count 0
Jan 29 11:36:01.367: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 11:36:01.367: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:01.367: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 29 11:36:01.468: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-869f26b7-428b-11ea-8d54-0242ac110005.15ee5870f9d031dc], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-6rqfq/filler-pod-869f26b7-428b-11ea-8d54-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-869f26b7-428b-11ea-8d54-0242ac110005.15ee587200305e9a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-869f26b7-428b-11ea-8d54-0242ac110005.15ee58729b5bc82a], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-869f26b7-428b-11ea-8d54-0242ac110005.15ee5872c5875405], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ee58734eba5a0e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:36:12.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6rqfq" for this suite.
Jan 29 11:36:19.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:36:19.271: INFO: namespace: e2e-tests-sched-pred-6rqfq, resource: bindings, ignored listing per whitelist
Jan 29 11:36:19.337: INFO: namespace e2e-tests-sched-pred-6rqfq deletion completed in 6.431186875s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:18.210 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
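
The "Insufficient cpu" event above follows from the scheduler summing the CPU requests already placed on the node (listed in the log) and rejecting a pod whose request does not fit into what remains. A sketch of a pod with an explicit CPU request; the 600m figure is an arbitrary example, not the value the test computes.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuRequestingPod asks for a fixed amount of CPU; if the node's remaining
// allocatable CPU is smaller, scheduling fails with "Insufficient cpu".
func cpuRequestingPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // image used by the filler pods above
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("600m"), // illustrative request
                    },
                },
            }},
        },
    }
}
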
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:36:19.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 29 11:36:20.419: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 29 11:36:20.446: INFO: Waiting for terminating namespaces to be deleted...
Jan 29 11:36:20.450: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 29 11:36:20.469: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:20.469: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:20.469: INFO: 	Container coredns ready: true, restart count 0
Jan 29 11:36:20.469: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:20.469: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 29 11:36:20.469: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:20.469: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 29 11:36:20.469: INFO: 	Container weave ready: true, restart count 0
Jan 29 11:36:20.469: INFO: 	Container weave-npc ready: true, restart count 0
Jan 29 11:36:20.469: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 29 11:36:20.469: INFO: 	Container coredns ready: true, restart count 0
Jan 29 11:36:20.469: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 29 11:36:20.469: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-993a1128-428b-11ea-8d54-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-993a1128-428b-11ea-8d54-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-993a1128-428b-11ea-8d54-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:36:44.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-jjnr5" for this suite.
Jan 29 11:37:07.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:37:07.119: INFO: namespace: e2e-tests-sched-pred-jjnr5, resource: bindings, ignored listing per whitelist
Jan 29 11:37:07.198: INFO: namespace e2e-tests-sched-pred-jjnr5 deletion completed in 22.293572225s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:47.860 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
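
The matching variant above first applies a random label to the node (the kubernetes.io/e2e-... key in the log) and then relaunches the pod with a nodeSelector for exactly that label. A sketch of the relaunched pod; the label key and value mirror the log's pattern but are placeholders here.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labeledNodePod schedules only onto a node carrying the given label, e.g. one
// labeled beforehand with: kubectl label nodes <node> kubernetes.io/e2e-demo=42
func labeledNodePod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            NodeSelector: map[string]string{"kubernetes.io/e2e-demo": "42"}, // placeholder key
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1",
            }},
        },
    }
}
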
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:37:07.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-aded0685-428b-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:37:07.442: INFO: Waiting up to 5m0s for pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-z7md9" to be "success or failure"
Jan 29 11:37:07.482: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.888065ms
Jan 29 11:37:09.596: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153432105s
Jan 29 11:37:11.702: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260102384s
Jan 29 11:37:13.902: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459536952s
Jan 29 11:37:15.912: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469839537s
Jan 29 11:37:18.152: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.709581282s
STEP: Saw pod success
Jan 29 11:37:18.152: INFO: Pod "pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:37:18.161: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 11:37:18.662: INFO: Waiting for pod pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005 to disappear
Jan 29 11:37:18.677: INFO: Pod pod-secrets-adeeea68-428b-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:37:18.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z7md9" for this suite.
Jan 29 11:37:24.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:37:24.898: INFO: namespace: e2e-tests-secrets-z7md9, resource: bindings, ignored listing per whitelist
Jan 29 11:37:25.090: INFO: namespace e2e-tests-secrets-z7md9 deletion completed in 6.403964005s

• [SLOW TEST:17.891 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
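
defaultMode on a secret volume, as consumed above, sets the permission bits of every file the secret projects into the mount. A sketch of the volume definition; the secret name and the 0400 mode are illustrative.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// secretVolumeWithDefaultMode mounts a secret with all projected files
// created with mode 0400.
func secretVolumeWithDefaultMode() corev1.Volume {
    mode := int32(0400) // illustrative; stored by the API as an int32
    return corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName:  "secret-test-demo",
                DefaultMode: &mode,
            },
        },
    }
}
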
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:37:25.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-b890d36b-428b-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:37:25.323: INFO: Waiting up to 5m0s for pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-cq6ll" to be "success or failure"
Jan 29 11:37:25.350: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.548794ms
Jan 29 11:37:27.576: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25308347s
Jan 29 11:37:29.600: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.277358832s
Jan 29 11:37:31.738: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414737919s
Jan 29 11:37:33.910: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.587345084s
Jan 29 11:37:36.244: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.920868093s
STEP: Saw pod success
Jan 29 11:37:36.244: INFO: Pod "pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:37:36.252: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 11:37:36.589: INFO: Waiting for pod pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005 to disappear
Jan 29 11:37:36.645: INFO: Pod pod-secrets-b8975efd-428b-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:37:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cq6ll" for this suite.
Jan 29 11:37:42.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:37:42.799: INFO: namespace: e2e-tests-secrets-cq6ll, resource: bindings, ignored listing per whitelist
Jan 29 11:37:42.914: INFO: namespace e2e-tests-secrets-cq6ll deletion completed in 6.221866471s

• [SLOW TEST:17.823 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
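
The "mappings and Item Mode" variant above remaps individual secret keys to chosen paths and gives each item its own mode, overriding the volume-wide defaultMode for that file. A sketch with placeholder key, path, and mode:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// secretVolumeWithItemMode projects one key of a secret to a remapped path
// with a per-item mode that overrides the volume's defaultMode.
func secretVolumeWithItemMode() corev1.Volume {
    itemMode := int32(0400)
    return corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: "secret-test-map-demo",
                Items: []corev1.KeyToPath{{
                    Key:  "data-1",          // placeholder key in the secret
                    Path: "new-path-data-1", // remapped file name inside the mount
                    Mode: &itemMode,
                }},
            },
        },
    }
}
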
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:37:42.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-8x7jd
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-8x7jd
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-8x7jd
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-8x7jd
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-8x7jd
Jan 29 11:37:57.321: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8x7jd, name: ss-0, uid: cb4bea58-428b-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Jan 29 11:37:57.463: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8x7jd, name: ss-0, uid: cb4bea58-428b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 29 11:37:57.490: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8x7jd, name: ss-0, uid: cb4bea58-428b-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Jan 29 11:37:57.528: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-8x7jd
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-8x7jd
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-8x7jd and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 29 11:38:10.345: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8x7jd
Jan 29 11:38:10.352: INFO: Scaling statefulset ss to 0
Jan 29 11:38:20.421: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 11:38:20.432: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:38:20.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-8x7jd" for this suite.
Jan 29 11:38:28.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:38:28.789: INFO: namespace: e2e-tests-statefulset-8x7jd, resource: bindings, ignored listing per whitelist
Jan 29 11:38:29.017: INFO: namespace e2e-tests-statefulset-8x7jd deletion completed in 8.500035642s

• [SLOW TEST:46.103 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
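
The eviction in the run above is provoked by a host-port conflict: a bare pod pinned to the node claims a hostPort, the StatefulSet's ss-0 pod is rejected (phase Failed in the log), and the controller keeps deleting and recreating ss-0 until the conflicting pod is removed. A sketch of the conflicting bare pod; the port number and image are illustrative assumptions.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod pins a pod to a node and claims a hostPort there. A StatefulSet
// pod template claiming the same hostPort on that node is rejected, and the
// StatefulSet controller recreates its pod until the conflict goes away.
func hostPortPod(nodeName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
        Spec: corev1.PodSpec{
            NodeName: nodeName, // bypass the scheduler, land on this node
            Containers: []corev1.Container{{
                Name:  "nginx",
                Image: "nginx:1.14-alpine", // illustrative image
                Ports: []corev1.ContainerPort{{
                    ContainerPort: 21017, // illustrative port
                    HostPort:      21017, // same hostPort as the stateful pod -> conflict
                }},
            }},
        },
    }
}
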
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:38:29.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 29 11:38:39.867: INFO: Successfully updated pod "labelsupdatedeb04a19-428b-11ea-8d54-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:38:41.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-72fs4" for this suite.
Jan 29 11:39:06.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:39:06.133: INFO: namespace: e2e-tests-downward-api-72fs4, resource: bindings, ignored listing per whitelist
Jan 29 11:39:06.209: INFO: namespace e2e-tests-downward-api-72fs4 deletion completed in 24.220673191s

• [SLOW TEST:37.191 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
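
The "should update labels on modification" case relies on the kubelet refreshing downward API volume files when pod metadata changes: after the test patches the pod's labels, the mounted labels file is rewritten in place without a restart. A sketch of the volume that makes this observable; the volume name and path are illustrative.

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// labelsDownwardVolume exposes the pod's labels as a file; the kubelet
// rewrites the file when the labels are modified, without restarting the pod.
func labelsDownwardVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path:     "labels",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                }},
            },
        },
    }
}
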
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:39:06.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f51114bd-428b-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:39:06.880: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-cfvbp" to be "success or failure"
Jan 29 11:39:06.895: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.106817ms
Jan 29 11:39:08.940: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059903192s
Jan 29 11:39:10.960: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079634262s
Jan 29 11:39:13.356: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.475797263s
Jan 29 11:39:15.434: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.55442758s
Jan 29 11:39:17.467: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.586849559s
STEP: Saw pod success
Jan 29 11:39:17.467: INFO: Pod "pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:39:17.477: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 11:39:17.991: INFO: Waiting for pod pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005 to disappear
Jan 29 11:39:18.079: INFO: Pod pod-projected-configmaps-f5200b87-428b-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:39:18.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cfvbp" for this suite.
Jan 29 11:39:24.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:39:24.408: INFO: namespace: e2e-tests-projected-cfvbp, resource: bindings, ignored listing per whitelist
Jan 29 11:39:24.631: INFO: namespace e2e-tests-projected-cfvbp deletion completed in 6.333785383s

• [SLOW TEST:18.422 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
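
A projected volume can bundle configMaps, secrets, and downward API data behind a single mount point; the case above consumes a configMap through that path. A sketch with a placeholder configMap name:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume surfaces a configMap through a projected volume,
// the mechanism exercised by the Projected configMap spec above.
func projectedConfigMapVolume() corev1.Volume {
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
                    },
                }},
            },
        },
    }
}
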
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:39:24.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 11:39:24.936: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 29 11:39:29.961: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 11:39:33.992: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 29 11:39:34.205: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-vsp2z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vsp2z/deployments/test-cleanup-deployment,UID:054e2c12-428c-11ea-a994-fa163e34d433,ResourceVersion:19853183,Generation:1,CreationTimestamp:2020-01-29 11:39:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan 29 11:39:34.229: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan 29 11:39:34.229: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 29 11:39:34.230: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-vsp2z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vsp2z/replicasets/test-cleanup-controller,UID:ffdcaac2-428b-11ea-a994-fa163e34d433,ResourceVersion:19853185,Generation:1,CreationTimestamp:2020-01-29 11:39:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 054e2c12-428c-11ea-a994-fa163e34d433 0xc000b71e1f 0xc000b71e30}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 11:39:34.248: INFO: Pod "test-cleanup-controller-7b25j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-7b25j,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-vsp2z,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vsp2z/pods/test-cleanup-controller-7b25j,UID:ffe7c64b-428b-11ea-a994-fa163e34d433,ResourceVersion:19853180,Generation:0,CreationTimestamp:2020-01-29 11:39:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller ffdcaac2-428b-11ea-a994-fa163e34d433 0xc00192fb97 0xc00192fb98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dvlrr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dvlrr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dvlrr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00192fc00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00192fc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:39:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:39:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:39:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:39:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-29 11:39:25 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 11:39:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d4812b23bb618acf2b7dc6ce7d30be83d7b913f5bde056fb2d71a1942cc15463}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:39:34.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vsp2z" for this suite.
Jan 29 11:39:44.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:39:46.374: INFO: namespace: e2e-tests-deployment-vsp2z, resource: bindings, ignored listing per whitelist
Jan 29 11:39:46.500: INFO: namespace e2e-tests-deployment-vsp2z deletion completed in 12.144049615s

• [SLOW TEST:21.868 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
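
The Deployment dumped above carries RevisionHistoryLimit:*0, which is what makes the controller prune superseded ReplicaSets immediately rather than keeping them for rollback. A sketch of a Deployment with that field set; names and labels are illustrative, the image matches the dump above.

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment keeps zero old ReplicaSets: as soon as a rollout
// supersedes a ReplicaSet, the deployment controller deletes it.
func cleanupDeployment() *appsv1.Deployment {
    replicas := int32(1)
    history := int32(0)
    labels := map[string]string{"name": "cleanup-pod"}
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas:             &replicas,
            RevisionHistoryLimit: &history, // prune old ReplicaSets immediately
            Selector:             &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
}
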
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:39:46.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 29 11:39:46.814: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:40:05.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7vr44" for this suite.
Jan 29 11:40:11.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:40:11.832: INFO: namespace: e2e-tests-init-container-7vr44, resource: bindings, ignored listing per whitelist
Jan 29 11:40:11.935: INFO: namespace e2e-tests-init-container-7vr44 deletion completed in 6.278448053s

• [SLOW TEST:25.434 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
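
With restartPolicy Never, a failing init container is terminal: the init container is not retried, the app containers never start, and the pod phase becomes Failed, which is the behaviour asserted above. A sketch with illustrative images and command:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod has an init container that exits non-zero. Because the pod's
// restartPolicy is Never, the app container never starts and the pod fails.
func failingInitPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{{
                Name:    "init-fail",
                Image:   "busybox",
                Command: []string{"sh", "-c", "exit 1"}, // deliberately fail
            }},
            Containers: []corev1.Container{{
                Name:  "app",
                Image: "k8s.gcr.io/pause:3.1",
            }},
        },
    }
}
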
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:40:11.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 11:40:12.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 29 11:40:12.687: INFO: stderr: ""
Jan 29 11:40:12.687: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:40:12.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rfsp9" for this suite.
Jan 29 11:40:18.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:40:18.874: INFO: namespace: e2e-tests-kubectl-rfsp9, resource: bindings, ignored listing per whitelist
Jan 29 11:40:18.931: INFO: namespace e2e-tests-kubectl-rfsp9 deletion completed in 6.231988439s

• [SLOW TEST:6.995 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
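The version check above only asserts that both a Client Version and a Server Version stanza are printed. For scripting, the same data is available in structured form; the jq pipeline below is an illustrative assumption (jq is an external tool, not part of kubectl):

kubectl version -o json
# pull out just the server's gitVersion (assumes jq is installed):
kubectl version -o json | jq -r '.serverVersion.gitVersion'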
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:40:18.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2025dc58-428c-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:40:19.088: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-p54xq" to be "success or failure"
Jan 29 11:40:19.192: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 103.994226ms
Jan 29 11:40:21.218: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129934499s
Jan 29 11:40:23.241: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152662002s
Jan 29 11:40:25.257: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.168882228s
Jan 29 11:40:27.271: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182710244s
Jan 29 11:40:29.811: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.722747969s
STEP: Saw pod success
Jan 29 11:40:29.812: INFO: Pod "pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:40:29.842: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 11:40:30.171: INFO: Waiting for pod pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:40:30.265: INFO: Pod pod-projected-configmaps-20293392-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:40:30.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p54xq" for this suite.
Jan 29 11:40:36.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:40:36.400: INFO: namespace: e2e-tests-projected-p54xq, resource: bindings, ignored listing per whitelist
Jan 29 11:40:36.583: INFO: namespace e2e-tests-projected-p54xq deletion completed in 6.302107891s

• [SLOW TEST:17.651 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
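The projected configMap spec above mounts a single key under a remapped path with an explicit per-item mode. A hand-written equivalent might look like the sketch below; the configMap name, key, mount path and mode value are illustrative rather than copied from the test:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected/remapped; stat -c %a /etc/projected/remapped"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: remapped
            mode: 0400                   # explicit per-item mode (value illustrative)
EOF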
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:40:36.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 29 11:40:36.784: INFO: Waiting up to 5m0s for pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-np4zx" to be "success or failure"
Jan 29 11:40:36.801: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.670325ms
Jan 29 11:40:38.826: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042441083s
Jan 29 11:40:40.845: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061451528s
Jan 29 11:40:43.171: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386849748s
Jan 29 11:40:45.185: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.400818292s
Jan 29 11:40:47.205: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.421437171s
STEP: Saw pod success
Jan 29 11:40:47.205: INFO: Pod "downward-api-2ab6736f-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:40:47.213: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-2ab6736f-428c-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 11:40:47.464: INFO: Waiting for pod downward-api-2ab6736f-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:40:47.469: INFO: Pod downward-api-2ab6736f-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:40:47.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-np4zx" for this suite.
Jan 29 11:40:53.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:40:53.651: INFO: namespace: e2e-tests-downward-api-np4zx, resource: bindings, ignored listing per whitelist
Jan 29 11:40:53.699: INFO: namespace e2e-tests-downward-api-np4zx deletion completed in 6.222370314s

• [SLOW TEST:17.116 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
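The downward API env-var spec above injects the pod's own name, namespace and IP through fieldRef selectors. A minimal sketch of the same wiring; the env var names and pod name are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo                # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF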
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:40:53.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 29 11:40:54.016: INFO: Waiting up to 5m0s for pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-var-expansion-gg6jp" to be "success or failure"
Jan 29 11:40:54.092: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 76.418053ms
Jan 29 11:40:56.107: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091583735s
Jan 29 11:40:58.666: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650015345s
Jan 29 11:41:00.681: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66509966s
Jan 29 11:41:02.706: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.689865537s
STEP: Saw pod success
Jan 29 11:41:02.706: INFO: Pod "var-expansion-34fb8625-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:41:02.714: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-34fb8625-428c-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 11:41:02.780: INFO: Waiting for pod var-expansion-34fb8625-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:41:02.786: INFO: Pod var-expansion-34fb8625-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:41:02.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-gg6jp" for this suite.
Jan 29 11:41:08.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:41:08.966: INFO: namespace: e2e-tests-var-expansion-gg6jp, resource: bindings, ignored listing per whitelist
Jan 29 11:41:09.072: INFO: namespace e2e-tests-var-expansion-gg6jp deletion completed in 6.27560466s

• [SLOW TEST:15.371 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
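Variable expansion composes one env var from another using the $(VAR) syntax, which is resolved from earlier entries before the container starts. A small sketch, with illustrative names and values:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo $(COMPOSED_VAR)"]
    env:
    - name: FIRST_VAR
      value: "foo"
    - name: COMPOSED_VAR
      value: "$(FIRST_VAR)-bar"          # expands to foo-bar before the container starts
EOF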
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:41:09.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 11:41:09.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c6gd7'
Jan 29 11:41:11.195: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 11:41:11.196: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jan 29 11:41:11.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-c6gd7'
Jan 29 11:41:11.524: INFO: stderr: ""
Jan 29 11:41:11.524: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:41:11.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c6gd7" for this suite.
Jan 29 11:41:19.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:41:19.879: INFO: namespace: e2e-tests-kubectl-c6gd7, resource: bindings, ignored listing per whitelist
Jan 29 11:41:19.934: INFO: namespace e2e-tests-kubectl-c6gd7 deletion completed in 8.388980641s

• [SLOW TEST:10.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
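The exact command the test ran is shown in the block above, and the client itself warns that the job/v1 generator is deprecated and points at kubectl create. Both forms are collected below for reference; kubectl create job is the later replacement and is an assumption here, since the 1.13 client used in this run may predate it:

# as run by the test (kubectl v1.13 client, deprecated job/v1 generator; the test also
# passed --kubeconfig and --namespace, omitted here):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/nginx:1.14-alpine
# later replacement in the direction of the deprecation warning (assumes a newer client):
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
# verify and clean up, as the test does:
kubectl get jobs e2e-test-nginx-job
kubectl delete jobs e2e-test-nginx-job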
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:41:19.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:41:20.211: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-b4r26" to be "success or failure"
Jan 29 11:41:20.219: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.704595ms
Jan 29 11:41:22.248: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03691487s
Jan 29 11:41:24.264: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052782715s
Jan 29 11:41:26.313: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101708422s
Jan 29 11:41:28.328: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116832399s
STEP: Saw pod success
Jan 29 11:41:28.328: INFO: Pod "downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:41:28.332: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:41:28.398: INFO: Waiting for pod downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:41:28.472: INFO: Pod downwardapi-volume-4496ee2d-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:41:28.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b4r26" for this suite.
Jan 29 11:41:34.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:41:34.704: INFO: namespace: e2e-tests-projected-b4r26, resource: bindings, ignored listing per whitelist
Jan 29 11:41:34.753: INFO: namespace e2e-tests-projected-b4r26 deletion completed in 6.260283323s

• [SLOW TEST:14.819 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
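Here the projected downward API volume publishes the container's own CPU request into a file via resourceFieldRef. A sketch with illustrative names and request values; the divisor is included so the file content is unambiguous:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m                # the file then holds "250" rather than a rounded-up core count
EOF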
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:41:34.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 29 11:41:34.920: INFO: Waiting up to 5m0s for pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-hrhwn" to be "success or failure"
Jan 29 11:41:34.949: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 29.362566ms
Jan 29 11:41:36.961: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04114319s
Jan 29 11:41:38.971: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05176701s
Jan 29 11:41:41.397: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477097438s
Jan 29 11:41:43.412: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492821995s
Jan 29 11:41:45.513: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.593423324s
STEP: Saw pod success
Jan 29 11:41:45.513: INFO: Pod "pod-4d5dfca2-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:41:45.522: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4d5dfca2-428c-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:41:46.215: INFO: Waiting for pod pod-4d5dfca2-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:41:46.294: INFO: Pod pod-4d5dfca2-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:41:46.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hrhwn" for this suite.
Jan 29 11:41:52.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:41:52.517: INFO: namespace: e2e-tests-emptydir-hrhwn, resource: bindings, ignored listing per whitelist
Jan 29 11:41:52.655: INFO: namespace e2e-tests-emptydir-hrhwn deletion completed in 6.342904733s

• [SLOW TEST:17.901 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
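The (root,0644,tmpfs) case writes a file into a memory-backed emptyDir and checks its content and 0644 mode. A sketch with illustrative names; the (non-root,0644,tmpfs) variant later in this log differs only in running the pod under a non-root securityContext:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command:
    - /bin/sh
    - -c
    - echo content > /test-volume/file && chmod 0644 /test-volume/file && stat -c %a /test-volume/file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                     # tmpfs-backed emptyDir
EOF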
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:41:52.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jan 29 11:41:52.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jfjl2'
Jan 29 11:41:53.518: INFO: stderr: ""
Jan 29 11:41:53.518: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jan 29 11:41:54.553: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:54.554: INFO: Found 0 / 1
Jan 29 11:41:55.541: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:55.541: INFO: Found 0 / 1
Jan 29 11:41:56.614: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:56.614: INFO: Found 0 / 1
Jan 29 11:41:57.546: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:57.546: INFO: Found 0 / 1
Jan 29 11:41:58.583: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:58.584: INFO: Found 0 / 1
Jan 29 11:41:59.532: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:41:59.532: INFO: Found 0 / 1
Jan 29 11:42:00.546: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:42:00.546: INFO: Found 0 / 1
Jan 29 11:42:01.537: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:42:01.537: INFO: Found 1 / 1
Jan 29 11:42:01.537: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 29 11:42:01.548: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 11:42:01.548: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan 29 11:42:01.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2'
Jan 29 11:42:01.769: INFO: stderr: ""
Jan 29 11:42:01.769: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Jan 11:42:00.757 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 11:42:00.757 # Server started, Redis version 3.2.12\n1:M 29 Jan 11:42:00.757 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jan 11:42:00.758 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan 29 11:42:01.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2 --tail=1'
Jan 29 11:42:01.979: INFO: stderr: ""
Jan 29 11:42:01.979: INFO: stdout: "1:M 29 Jan 11:42:00.758 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan 29 11:42:01.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2 --limit-bytes=1'
Jan 29 11:42:02.139: INFO: stderr: ""
Jan 29 11:42:02.139: INFO: stdout: " "
STEP: exposing timestamps
Jan 29 11:42:02.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2 --tail=1 --timestamps'
Jan 29 11:42:02.289: INFO: stderr: ""
Jan 29 11:42:02.289: INFO: stdout: "2020-01-29T11:42:00.758340476Z 1:M 29 Jan 11:42:00.758 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan 29 11:42:04.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2 --since=1s'
Jan 29 11:42:04.981: INFO: stderr: ""
Jan 29 11:42:04.982: INFO: stdout: ""
Jan 29 11:42:04.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-25ngr redis-master --namespace=e2e-tests-kubectl-jfjl2 --since=24h'
Jan 29 11:42:05.113: INFO: stderr: ""
Jan 29 11:42:05.113: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Jan 11:42:00.757 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 11:42:00.757 # Server started, Redis version 3.2.12\n1:M 29 Jan 11:42:00.757 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jan 11:42:00.758 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jan 29 11:42:05.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-jfjl2'
Jan 29 11:42:05.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 11:42:05.252: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan 29 11:42:05.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-jfjl2'
Jan 29 11:42:05.387: INFO: stderr: "No resources found.\n"
Jan 29 11:42:05.388: INFO: stdout: ""
Jan 29 11:42:05.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-jfjl2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 29 11:42:05.502: INFO: stderr: ""
Jan 29 11:42:05.503: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:42:05.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jfjl2" for this suite.
Jan 29 11:42:29.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:42:29.763: INFO: namespace: e2e-tests-kubectl-jfjl2, resource: bindings, ignored listing per whitelist
Jan 29 11:42:29.802: INFO: namespace e2e-tests-kubectl-jfjl2 deletion completed in 24.292514529s

• [SLOW TEST:37.146 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
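The block above exercises kubectl's main log-filtering flags against the redis-master pod it created. Collected for reference, using the canonical logs verb and the pod, container and namespace names from this particular run (they will differ in any other run):

kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2                  # full log
kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2 --tail=1         # last line only
kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2 --limit-bytes=1  # first byte only
kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2 --tail=1 --timestamps
kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2 --since=1s       # usually empty
kubectl logs redis-master-25ngr -c redis-master --namespace=e2e-tests-kubectl-jfjl2 --since=24h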
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:42:29.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6e403e99-428c-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:42:30.104: INFO: Waiting up to 5m0s for pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-sxwbn" to be "success or failure"
Jan 29 11:42:30.122: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 17.257182ms
Jan 29 11:42:32.206: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102007349s
Jan 29 11:42:34.220: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116031759s
Jan 29 11:42:36.690: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585947093s
Jan 29 11:42:38.703: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.598885396s
STEP: Saw pod success
Jan 29 11:42:38.703: INFO: Pod "pod-secrets-6e413344-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:42:38.787: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6e413344-428c-11ea-8d54-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 29 11:42:38.995: INFO: Waiting for pod pod-secrets-6e413344-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:42:39.008: INFO: Pod pod-secrets-6e413344-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:42:39.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sxwbn" for this suite.
Jan 29 11:42:45.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:42:45.199: INFO: namespace: e2e-tests-secrets-sxwbn, resource: bindings, ignored listing per whitelist
Jan 29 11:42:45.220: INFO: namespace e2e-tests-secrets-sxwbn deletion completed in 6.198953497s

• [SLOW TEST:15.418 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
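The secret env-var spec above creates a secret and surfaces one key as an environment variable through secretKeyRef. A sketch with illustrative secret, key and variable names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo                  # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["/bin/sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF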
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:42:45.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-775e5e5c-428c-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 11:42:45.412: INFO: Waiting up to 5m0s for pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-27vnj" to be "success or failure"
Jan 29 11:42:45.676: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 263.908868ms
Jan 29 11:42:47.781: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369454085s
Jan 29 11:42:49.806: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393787594s
Jan 29 11:42:51.857: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444557833s
Jan 29 11:42:53.876: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463872029s
Jan 29 11:42:55.889: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477078682s
Jan 29 11:42:57.994: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.582329337s
Jan 29 11:43:00.008: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.595579516s
STEP: Saw pod success
Jan 29 11:43:00.008: INFO: Pod "pod-secrets-775f8378-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:43:00.012: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-775f8378-428c-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 11:43:00.081: INFO: Waiting for pod pod-secrets-775f8378-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:43:00.097: INFO: Pod pod-secrets-775f8378-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:43:00.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-27vnj" for this suite.
Jan 29 11:43:06.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:43:06.243: INFO: namespace: e2e-tests-secrets-27vnj, resource: bindings, ignored listing per whitelist
Jan 29 11:43:06.381: INFO: namespace e2e-tests-secrets-27vnj deletion completed in 6.276220117s

• [SLOW TEST:21.161 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
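The secret volume variant mounts the secret's keys as files instead of env vars. A sketch with illustrative names:

kubectl create secret generic demo-secret-vol --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo               # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret-vol
EOF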
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:43:06.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:43:06.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-m9mw7" to be "success or failure"
Jan 29 11:43:06.766: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.279772ms
Jan 29 11:43:09.099: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347123956s
Jan 29 11:43:11.120: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367719143s
Jan 29 11:43:13.237: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484586288s
Jan 29 11:43:15.359: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.606994841s
Jan 29 11:43:17.468: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.715458292s
STEP: Saw pod success
Jan 29 11:43:17.468: INFO: Pod "downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:43:17.505: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:43:17.603: INFO: Waiting for pod downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:43:17.616: INFO: Pod downwardapi-volume-8413bd6b-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:43:17.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-m9mw7" for this suite.
Jan 29 11:43:25.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:43:25.900: INFO: namespace: e2e-tests-downward-api-m9mw7, resource: bindings, ignored listing per whitelist
Jan 29 11:43:25.917: INFO: namespace e2e-tests-downward-api-m9mw7 deletion completed in 8.293810485s

• [SLOW TEST:19.535 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
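DefaultMode applies one file mode to every entry of a downward API volume. A sketch with illustrative names; the concrete mode value the test asserts is not visible in this log, so 0400 here is only an example:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo     # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                  # applied to every file in the volume (value illustrative)
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF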
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:43:25.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:43:34.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-r5g7c" for this suite.
Jan 29 11:44:28.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:44:28.348: INFO: namespace: e2e-tests-kubelet-test-r5g7c, resource: bindings, ignored listing per whitelist
Jan 29 11:44:28.419: INFO: namespace e2e-tests-kubelet-test-r5g7c deletion completed in 54.218752918s

• [SLOW TEST:62.502 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
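The read-only busybox test asserts that writes to the container's root filesystem are refused when readOnlyRootFilesystem is set. A sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "echo test > /file || echo 'write to the root filesystem was refused'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF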
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:44:28.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 29 11:44:29.014: INFO: Waiting up to 5m0s for pod "pod-b5211098-428c-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-z4mb4" to be "success or failure"
Jan 29 11:44:29.023: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.716043ms
Jan 29 11:44:31.046: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031665554s
Jan 29 11:44:33.063: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048684298s
Jan 29 11:44:35.673: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659231229s
Jan 29 11:44:37.687: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.672935215s
Jan 29 11:44:40.010: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.995792272s
STEP: Saw pod success
Jan 29 11:44:40.010: INFO: Pod "pod-b5211098-428c-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:44:40.021: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b5211098-428c-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 11:44:40.573: INFO: Waiting for pod pod-b5211098-428c-11ea-8d54-0242ac110005 to disappear
Jan 29 11:44:40.581: INFO: Pod pod-b5211098-428c-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:44:40.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-z4mb4" for this suite.
Jan 29 11:44:46.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:44:46.826: INFO: namespace: e2e-tests-emptydir-z4mb4, resource: bindings, ignored listing per whitelist
Jan 29 11:44:46.838: INFO: namespace e2e-tests-emptydir-z4mb4 deletion completed in 6.250166329s

• [SLOW TEST:18.419 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:44:46.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 29 11:44:47.067: INFO: Number of nodes with available pods: 0
Jan 29 11:44:47.067: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:48.082: INFO: Number of nodes with available pods: 0
Jan 29 11:44:48.082: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:49.460: INFO: Number of nodes with available pods: 0
Jan 29 11:44:49.460: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:50.086: INFO: Number of nodes with available pods: 0
Jan 29 11:44:50.086: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:51.098: INFO: Number of nodes with available pods: 0
Jan 29 11:44:51.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:52.452: INFO: Number of nodes with available pods: 0
Jan 29 11:44:52.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:53.434: INFO: Number of nodes with available pods: 0
Jan 29 11:44:53.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:54.109: INFO: Number of nodes with available pods: 0
Jan 29 11:44:54.109: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:55.097: INFO: Number of nodes with available pods: 1
Jan 29 11:44:55.097: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 29 11:44:55.163: INFO: Number of nodes with available pods: 0
Jan 29 11:44:55.163: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:56.192: INFO: Number of nodes with available pods: 0
Jan 29 11:44:56.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:57.207: INFO: Number of nodes with available pods: 0
Jan 29 11:44:57.207: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:58.189: INFO: Number of nodes with available pods: 0
Jan 29 11:44:58.189: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:44:59.207: INFO: Number of nodes with available pods: 0
Jan 29 11:44:59.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:00.182: INFO: Number of nodes with available pods: 0
Jan 29 11:45:00.182: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:01.192: INFO: Number of nodes with available pods: 0
Jan 29 11:45:01.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:02.185: INFO: Number of nodes with available pods: 0
Jan 29 11:45:02.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:03.190: INFO: Number of nodes with available pods: 0
Jan 29 11:45:03.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:04.181: INFO: Number of nodes with available pods: 0
Jan 29 11:45:04.181: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:05.190: INFO: Number of nodes with available pods: 0
Jan 29 11:45:05.190: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:06.185: INFO: Number of nodes with available pods: 0
Jan 29 11:45:06.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:07.206: INFO: Number of nodes with available pods: 0
Jan 29 11:45:07.206: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:08.192: INFO: Number of nodes with available pods: 0
Jan 29 11:45:08.192: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:09.183: INFO: Number of nodes with available pods: 0
Jan 29 11:45:09.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:10.193: INFO: Number of nodes with available pods: 0
Jan 29 11:45:10.193: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:11.203: INFO: Number of nodes with available pods: 0
Jan 29 11:45:11.203: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:12.187: INFO: Number of nodes with available pods: 0
Jan 29 11:45:12.187: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:13.185: INFO: Number of nodes with available pods: 0
Jan 29 11:45:13.185: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:14.191: INFO: Number of nodes with available pods: 0
Jan 29 11:45:14.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:15.184: INFO: Number of nodes with available pods: 0
Jan 29 11:45:15.184: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:16.179: INFO: Number of nodes with available pods: 0
Jan 29 11:45:16.179: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:17.493: INFO: Number of nodes with available pods: 0
Jan 29 11:45:17.493: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:18.187: INFO: Number of nodes with available pods: 0
Jan 29 11:45:18.187: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:19.183: INFO: Number of nodes with available pods: 0
Jan 29 11:45:19.183: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:45:20.217: INFO: Number of nodes with available pods: 1
Jan 29 11:45:20.217: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bpztp, will wait for the garbage collector to delete the pods
Jan 29 11:45:20.296: INFO: Deleting DaemonSet.extensions daemon-set took: 20.642606ms
Jan 29 11:45:20.396: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.510567ms
Jan 29 11:45:32.719: INFO: Number of nodes with available pods: 0
Jan 29 11:45:32.719: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 11:45:32.723: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bpztp/daemonsets","resourceVersion":"19854020"},"items":null}

Jan 29 11:45:32.727: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bpztp/pods","resourceVersion":"19854020"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:45:32.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bpztp" for this suite.
Jan 29 11:45:40.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:45:40.923: INFO: namespace: e2e-tests-daemonsets-bpztp, resource: bindings, ignored listing per whitelist
Jan 29 11:45:41.015: INFO: namespace e2e-tests-daemonsets-bpztp deletion completed in 8.267678374s

• [SLOW TEST:54.176 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
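The DaemonSet spec above ("daemon-set") should place one pod on every schedulable node, and the controller recreates a pod that is deleted, which is exactly what the polling in that block waits for. A sketch of an equivalent hand-written DaemonSet; the name and labels are illustrative, and the image is simply reused from elsewhere in this run rather than taken from the test:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo                  # illustrative
spec:
  selector:
    matchLabels:
      app: daemon-set-demo
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# one pod should come up per schedulable node; a deleted pod is recreated by the controller
kubectl get pods -l app=daemon-set-demo -o wide
kubectl delete daemonset daemon-set-demo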
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:45:41.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9vd2m
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 11:45:41.278: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 11:46:15.687: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9vd2m PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 11:46:15.688: INFO: >>> kubeConfig: /root/.kube/config
I0129 11:46:15.871410       8 log.go:172] (0xc0000eadc0) (0xc0021c8aa0) Create stream
I0129 11:46:15.871546       8 log.go:172] (0xc0000eadc0) (0xc0021c8aa0) Stream added, broadcasting: 1
I0129 11:46:15.881108       8 log.go:172] (0xc0000eadc0) Reply frame received for 1
I0129 11:46:15.881149       8 log.go:172] (0xc0000eadc0) (0xc002132780) Create stream
I0129 11:46:15.881164       8 log.go:172] (0xc0000eadc0) (0xc002132780) Stream added, broadcasting: 3
I0129 11:46:15.882986       8 log.go:172] (0xc0000eadc0) Reply frame received for 3
I0129 11:46:15.883017       8 log.go:172] (0xc0000eadc0) (0xc0021c8b40) Create stream
I0129 11:46:15.883028       8 log.go:172] (0xc0000eadc0) (0xc0021c8b40) Stream added, broadcasting: 5
I0129 11:46:15.885671       8 log.go:172] (0xc0000eadc0) Reply frame received for 5
I0129 11:46:17.069963       8 log.go:172] (0xc0000eadc0) Data frame received for 3
I0129 11:46:17.070111       8 log.go:172] (0xc002132780) (3) Data frame handling
I0129 11:46:17.070201       8 log.go:172] (0xc002132780) (3) Data frame sent
I0129 11:46:17.212679       8 log.go:172] (0xc0000eadc0) Data frame received for 1
I0129 11:46:17.213073       8 log.go:172] (0xc0021c8aa0) (1) Data frame handling
I0129 11:46:17.213154       8 log.go:172] (0xc0021c8aa0) (1) Data frame sent
I0129 11:46:17.213206       8 log.go:172] (0xc0000eadc0) (0xc0021c8aa0) Stream removed, broadcasting: 1
I0129 11:46:17.214453       8 log.go:172] (0xc0000eadc0) (0xc0021c8b40) Stream removed, broadcasting: 5
I0129 11:46:17.214521       8 log.go:172] (0xc0000eadc0) (0xc002132780) Stream removed, broadcasting: 3
I0129 11:46:17.214821       8 log.go:172] (0xc0000eadc0) (0xc0021c8aa0) Stream removed, broadcasting: 1
I0129 11:46:17.214853       8 log.go:172] (0xc0000eadc0) (0xc002132780) Stream removed, broadcasting: 3
I0129 11:46:17.214884       8 log.go:172] (0xc0000eadc0) (0xc0021c8b40) Stream removed, broadcasting: 5
I0129 11:46:17.216097       8 log.go:172] (0xc0000eadc0) Go away received
Jan 29 11:46:17.216: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:46:17.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9vd2m" for this suite.
Jan 29 11:46:41.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:46:41.436: INFO: namespace: e2e-tests-pod-network-test-9vd2m, resource: bindings, ignored listing per whitelist
Jan 29 11:46:41.475: INFO: namespace e2e-tests-pod-network-test-9vd2m deletion completed in 24.228530034s

• [SLOW TEST:60.459 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
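The whole connectivity check reduces to the single exec shown in the ExecWithOptions line above: from the host-network helper pod, send one UDP datagram to the netserver pod IP on port 8081 and expect its hostname back. Roughly (pod names, namespace and IP are the ones from this run; substitute your own):

    kubectl exec host-test-container-pod -c hostexec \
      --namespace=e2e-tests-pod-network-test-9vd2m -- \
      /bin/sh -c "echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"
    # A non-empty reply ("netserver-0") means node-to-pod UDP traffic works.
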
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:46:41.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-043fbf43-428d-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:46:41.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-tfkt5" to be "success or failure"
Jan 29 11:46:41.796: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 24.279712ms
Jan 29 11:46:43.807: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035881098s
Jan 29 11:46:45.848: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076641028s
Jan 29 11:46:47.984: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212225178s
Jan 29 11:46:50.199: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427563604s
Jan 29 11:46:52.217: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.444926561s
STEP: Saw pod success
Jan 29 11:46:52.217: INFO: Pod "pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:46:52.224: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 29 11:46:52.415: INFO: Waiting for pod pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005 to disappear
Jan 29 11:46:52.433: INFO: Pod pod-configmaps-0440cae7-428d-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:46:52.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tfkt5" for this suite.
Jan 29 11:46:58.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:46:58.754: INFO: namespace: e2e-tests-configmap-tfkt5, resource: bindings, ignored listing per whitelist
Jan 29 11:46:58.782: INFO: namespace e2e-tests-configmap-tfkt5 deletion completed in 6.34233435s

• [SLOW TEST:17.308 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
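A hand-written equivalent of the "volume with mappings" case: a ConfigMap key is remapped to a different path inside the volume via items. Key names, paths and the busybox image below are illustrative; the suite uses generated names and asserts on the mounted file's contents and mode.

    kubectl create configmap configmap-test-volume-map --from-literal=data-2=value-2 -n <ns>
    cat <<'EOF' | kubectl create -f - -n <ns>
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
      volumes:
      - name: configmap-volume
        configMap:
          name: configmap-test-volume-map
          items:
          - key: data-2
            path: path/to/data-2
    EOF
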
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:46:58.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 29 11:46:58.972: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:47:20.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-k4n5v" for this suite.
Jan 29 11:47:44.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:47:44.434: INFO: namespace: e2e-tests-init-container-k4n5v, resource: bindings, ignored listing per whitelist
Jan 29 11:47:44.659: INFO: namespace e2e-tests-init-container-k4n5v deletion completed in 24.353799793s

• [SLOW TEST:45.876 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
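What this spec verifies is ordinary init-container semantics: with restartPolicy Always, all initContainers run to completion, in order, before the regular container starts, and their results appear under status.initContainerStatuses. A minimal sketch (images and names illustrative):

    cat <<'EOF' | kubectl create -f - -n <ns>
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo
    spec:
      initContainers:
      - name: init-1
        image: busybox
        command: ["/bin/true"]
      - name: init-2
        image: busybox
        command: ["/bin/true"]
      containers:
      - name: run1
        image: docker.io/library/nginx:1.14-alpine
    EOF
    kubectl get pod pod-init-demo -n <ns> -w   # watch Init:0/2 -> Init:1/2 -> Running
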
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:47:44.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:47:45.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-9rbt7" to be "success or failure"
Jan 29 11:47:45.061: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.908655ms
Jan 29 11:47:47.493: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472909424s
Jan 29 11:47:49.516: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494949178s
Jan 29 11:47:52.245: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.224111911s
Jan 29 11:47:54.265: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.244626041s
Jan 29 11:47:56.286: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.265384259s
STEP: Saw pod success
Jan 29 11:47:56.286: INFO: Pod "downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:47:56.303: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:47:56.745: INFO: Waiting for pod downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005 to disappear
Jan 29 11:47:56.838: INFO: Pod downwardapi-volume-29f514f5-428d-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:47:56.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9rbt7" for this suite.
Jan 29 11:48:02.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:48:03.063: INFO: namespace: e2e-tests-downward-api-9rbt7, resource: bindings, ignored listing per whitelist
Jan 29 11:48:03.124: INFO: namespace e2e-tests-downward-api-9rbt7 deletion completed in 6.271277543s

• [SLOW TEST:18.463 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
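The pod under test exposes its own memory request through a downwardAPI volume. A minimal sketch of that shape (names and the 32Mi figure are illustrative):

    cat <<'EOF' | kubectl create -f - -n <ns>
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
    EOF
    # With the default divisor the file holds the request in bytes (33554432 for 32Mi),
    # which is what the container log is checked against before the pod is deleted.
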
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:48:03.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 11:48:03.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:03.483: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 11:48:03.483: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jan 29 11:48:03.505: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jan 29 11:48:03.512: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 29 11:48:03.553: INFO: scanned /root for discovery docs: 
Jan 29 11:48:03.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:26.747: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 29 11:48:26.747: INFO: stdout: "Created e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd\nScaling up e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Jan 29 11:48:26.747: INFO: stdout: "Created e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd\nScaling up e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan 29 11:48:26.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:26.876: INFO: stderr: ""
Jan 29 11:48:26.876: INFO: stdout: "e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f e2e-test-nginx-rc-lx6zl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 29 11:48:31.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:32.079: INFO: stderr: ""
Jan 29 11:48:32.079: INFO: stdout: "e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f e2e-test-nginx-rc-lx6zl "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan 29 11:48:37.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:37.281: INFO: stderr: ""
Jan 29 11:48:37.281: INFO: stdout: "e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f "
Jan 29 11:48:37.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:37.432: INFO: stderr: ""
Jan 29 11:48:37.432: INFO: stdout: "true"
Jan 29 11:48:37.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:37.551: INFO: stderr: ""
Jan 29 11:48:37.551: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan 29 11:48:37.551: INFO: e2e-test-nginx-rc-11a2564864840242c277dd1d9b42e6cd-zxg7f is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jan 29 11:48:37.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-mnbcc'
Jan 29 11:48:37.792: INFO: stderr: ""
Jan 29 11:48:37.793: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:48:37.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mnbcc" for this suite.
Jan 29 11:49:01.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:49:02.037: INFO: namespace: e2e-tests-kubectl-mnbcc, resource: bindings, ignored listing per whitelist
Jan 29 11:49:02.213: INFO: namespace e2e-tests-kubectl-mnbcc deletion completed in 24.405476944s

• [SLOW TEST:59.089 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
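Both commands used above are already deprecated in this release, as the stderr lines note; the same flow by hand is just the rc creation followed by a rolling-update to the identical image (taken verbatim from the run, namespace aside):

    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 -n <ns>
    kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
      --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent -n <ns>
    # On current clusters the equivalent is a Deployment plus `kubectl rollout`.
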
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:49:02.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-581facef-428d-11ea-8d54-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-581facef-428d-11ea-8d54-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:50:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fm8wz" for this suite.
Jan 29 11:50:50.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:50:50.936: INFO: namespace: e2e-tests-configmap-fm8wz, resource: bindings, ignored listing per whitelist
Jan 29 11:50:50.988: INFO: namespace e2e-tests-configmap-fm8wz deletion completed in 24.261403683s

• [SLOW TEST:108.774 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
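The update step corresponds to changing the ConfigMap's data in place while a pod has it mounted as a configMap volume; the kubelet refreshes such volumes on its periodic sync, which is why the spec simply waits to observe the new value. Roughly (names and values illustrative):

    kubectl create configmap configmap-test-upd --from-literal=data-1=value-1 -n <ns>
    # ...mount it into a pod as a configMap volume, then change the data in place:
    kubectl patch configmap configmap-test-upd -n <ns> --type merge -p '{"data":{"data-1":"value-2"}}'
    # The file inside the running pod picks up "value-2" after the next kubelet sync.
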
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:50:50.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 11:50:51.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-g5rkl" to be "success or failure"
Jan 29 11:50:51.216: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.876436ms
Jan 29 11:50:53.256: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05973962s
Jan 29 11:50:55.368: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171402032s
Jan 29 11:50:57.380: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183523042s
Jan 29 11:50:59.404: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207234905s
Jan 29 11:51:01.421: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224723523s
STEP: Saw pod success
Jan 29 11:51:01.421: INFO: Pod "downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:51:01.426: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 11:51:02.220: INFO: Waiting for pod downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005 to disappear
Jan 29 11:51:02.632: INFO: Pod downwardapi-volume-98ef5296-428d-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:51:02.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-g5rkl" for this suite.
Jan 29 11:51:08.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:51:08.708: INFO: namespace: e2e-tests-downward-api-g5rkl, resource: bindings, ignored listing per whitelist
Jan 29 11:51:08.807: INFO: namespace e2e-tests-downward-api-g5rkl deletion completed in 6.157433436s

• [SLOW TEST:17.817 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:51:08.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 11:51:09.016: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 29 11:51:09.040: INFO: Number of nodes with available pods: 0
Jan 29 11:51:09.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:10.062: INFO: Number of nodes with available pods: 0
Jan 29 11:51:10.062: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:11.330: INFO: Number of nodes with available pods: 0
Jan 29 11:51:11.330: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:12.059: INFO: Number of nodes with available pods: 0
Jan 29 11:51:12.059: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:13.057: INFO: Number of nodes with available pods: 0
Jan 29 11:51:13.057: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:14.484: INFO: Number of nodes with available pods: 0
Jan 29 11:51:14.484: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:15.779: INFO: Number of nodes with available pods: 0
Jan 29 11:51:15.780: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:16.107: INFO: Number of nodes with available pods: 0
Jan 29 11:51:16.107: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:17.078: INFO: Number of nodes with available pods: 0
Jan 29 11:51:17.078: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:18.071: INFO: Number of nodes with available pods: 1
Jan 29 11:51:18.071: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 29 11:51:18.172: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:19.380: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:20.334: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:21.337: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:22.333: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:23.349: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:24.337: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:24.337: INFO: Pod daemon-set-z2t7s is not available
Jan 29 11:51:25.331: INFO: Wrong image for pod: daemon-set-z2t7s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 29 11:51:25.331: INFO: Pod daemon-set-z2t7s is not available
Jan 29 11:51:26.478: INFO: Pod daemon-set-977dl is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 29 11:51:26.550: INFO: Number of nodes with available pods: 0
Jan 29 11:51:26.551: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:27.617: INFO: Number of nodes with available pods: 0
Jan 29 11:51:27.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:28.611: INFO: Number of nodes with available pods: 0
Jan 29 11:51:28.611: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:29.574: INFO: Number of nodes with available pods: 0
Jan 29 11:51:29.574: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:30.763: INFO: Number of nodes with available pods: 0
Jan 29 11:51:30.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:31.879: INFO: Number of nodes with available pods: 0
Jan 29 11:51:31.880: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:32.582: INFO: Number of nodes with available pods: 0
Jan 29 11:51:32.582: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:34.511: INFO: Number of nodes with available pods: 0
Jan 29 11:51:34.512: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:34.765: INFO: Number of nodes with available pods: 0
Jan 29 11:51:34.765: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:35.579: INFO: Number of nodes with available pods: 0
Jan 29 11:51:35.579: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:36.650: INFO: Number of nodes with available pods: 0
Jan 29 11:51:36.650: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 11:51:37.576: INFO: Number of nodes with available pods: 1
Jan 29 11:51:37.576: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-trbtn, will wait for the garbage collector to delete the pods
Jan 29 11:51:37.710: INFO: Deleting DaemonSet.extensions daemon-set took: 36.566278ms
Jan 29 11:51:37.811: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.55059ms
Jan 29 11:51:52.740: INFO: Number of nodes with available pods: 0
Jan 29 11:51:52.740: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 11:51:52.762: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-trbtn/daemonsets","resourceVersion":"19854806"},"items":null}

Jan 29 11:51:52.773: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-trbtn/pods","resourceVersion":"19854806"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:51:52.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-trbtn" for this suite.
Jan 29 11:52:00.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:52:00.961: INFO: namespace: e2e-tests-daemonsets-trbtn, resource: bindings, ignored listing per whitelist
Jan 29 11:52:01.035: INFO: namespace e2e-tests-daemonsets-trbtn deletion completed in 8.223439998s

• [SLOW TEST:52.228 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
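The "Update daemon pods image" step is equivalent to patching the DaemonSet's pod template image while updateStrategy.type is RollingUpdate; the controller then replaces the old pod, which is what the "Wrong image for pod" and "is not available" lines trace. A sketch with kubectl (the container name app is illustrative; both images are the ones from this run):

    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0 -n <ns>
    kubectl rollout status daemonset/daemon-set -n <ns>
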
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:52:01.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 29 11:52:11.846: INFO: Successfully updated pod "annotationupdatec2a93dbd-428d-11ea-8d54-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:52:14.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v6x2m" for this suite.
Jan 29 11:52:36.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:52:36.384: INFO: namespace: e2e-tests-projected-v6x2m, resource: bindings, ignored listing per whitelist
Jan 29 11:52:36.419: INFO: namespace e2e-tests-projected-v6x2m deletion completed in 22.280344736s

• [SLOW TEST:35.383 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
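Pod annotations exposed through a projected downwardAPI volume are refreshed in the running pod, so the update step amounts to rewriting an annotation and waiting for the mounted file to change. Roughly (pod name, key and value illustrative):

    kubectl annotate pod annotationupdate-demo -n <ns> --overwrite builder=this-is-the-new-value
    # The projected volume file backed by metadata.annotations is rewritten by the
    # kubelet on its next sync, without restarting the container.
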
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:52:36.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 29 11:52:36.814: INFO: Waiting up to 5m0s for pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005" in namespace "e2e-tests-var-expansion-b7q8w" to be "success or failure"
Jan 29 11:52:36.833: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 18.181185ms
Jan 29 11:52:38.843: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028282716s
Jan 29 11:52:40.862: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047049265s
Jan 29 11:52:42.888: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072992576s
Jan 29 11:52:45.012: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197612167s
Jan 29 11:52:47.400: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.585443084s
STEP: Saw pod success
Jan 29 11:52:47.400: INFO: Pod "var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:52:47.680: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 11:52:47.762: INFO: Waiting for pod var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005 to disappear
Jan 29 11:52:47.771: INFO: Pod var-expansion-d7e03a03-428d-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:52:47.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-b7q8w" for this suite.
Jan 29 11:52:54.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:52:54.125: INFO: namespace: e2e-tests-var-expansion-b7q8w, resource: bindings, ignored listing per whitelist
Jan 29 11:52:54.161: INFO: namespace e2e-tests-var-expansion-b7q8w deletion completed in 6.177582708s

• [SLOW TEST:17.742 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
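The substitution being tested is the kubelet's $(VAR) expansion in command/args, resolved from the container's own env. A minimal sketch (env name and value illustrative):

    cat <<'EOF' | kubectl create -f - -n <ns>
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: TEST_VAR
          value: "test-value"
        command: ["sh", "-c"]
        args: ["echo $(TEST_VAR)"]
    EOF
    kubectl logs var-expansion-demo -n <ns>    # prints: test-value
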
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:52:54.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 29 11:52:54.664: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-wjqvl" to be "success or failure"
Jan 29 11:52:54.700: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 35.78641ms
Jan 29 11:52:56.757: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092948943s
Jan 29 11:52:58.771: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106719765s
Jan 29 11:53:00.787: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122825132s
Jan 29 11:53:03.352: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687477769s
Jan 29 11:53:05.368: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.703652793s
Jan 29 11:53:07.400: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.735704934s
STEP: Saw pod success
Jan 29 11:53:07.400: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 29 11:53:07.415: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 29 11:53:07.618: INFO: Waiting for pod pod-host-path-test to disappear
Jan 29 11:53:07.625: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:53:07.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-wjqvl" for this suite.
Jan 29 11:53:13.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:53:13.781: INFO: namespace: e2e-tests-hostpath-wjqvl, resource: bindings, ignored listing per whitelist
Jan 29 11:53:13.961: INFO: namespace e2e-tests-hostpath-wjqvl deletion completed in 6.327764943s

• [SLOW TEST:19.799 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
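pod-host-path-test mounts a hostPath volume and checks the mode of the mount point from inside the container. A hand-rolled pod of the same shape (host path, type and image illustrative):

    cat <<'EOF' | kubectl create -f - -n <ns>
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-host-path-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        command: ["sh", "-c", "ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp
          type: Directory
    EOF
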
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:53:13.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 29 11:53:21.589: INFO: 10 pods remaining
Jan 29 11:53:21.589: INFO: 10 pods has nil DeletionTimestamp
Jan 29 11:53:21.589: INFO: 
Jan 29 11:53:22.108: INFO: 9 pods remaining
Jan 29 11:53:22.109: INFO: 9 pods has nil DeletionTimestamp
Jan 29 11:53:22.109: INFO: 
STEP: Gathering metrics
W0129 11:53:22.810658       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 11:53:22.810: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:53:22.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-9fpkx" for this suite.
Jan 29 11:53:38.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:53:39.027: INFO: namespace: e2e-tests-gc-9fpkx, resource: bindings, ignored listing per whitelist
Jan 29 11:53:39.031: INFO: namespace e2e-tests-gc-9fpkx deletion completed in 16.217021058s

• [SLOW TEST:25.069 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
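The deleteOptions in question is foreground propagation: the rc gets a deletionTimestamp and the foregroundDeletion finalizer, and only disappears once the garbage collector has removed its pods, matching the "N pods remaining" countdown above. Against the raw API this looks roughly like the following (namespace and rc name are placeholders):

    kubectl proxy --port=8001 &
    curl -s -X DELETE \
      -H "Content-Type: application/json" \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://127.0.0.1:8001/api/v1/namespaces/<ns>/replicationcontrollers/<rc-name>
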
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:53:39.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 29 11:53:47.454: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-fd2631f3-428d-11ea-8d54-0242ac110005,GenerateName:,Namespace:e2e-tests-events-bw27t,SelfLink:/api/v1/namespaces/e2e-tests-events-bw27t/pods/send-events-fd2631f3-428d-11ea-8d54-0242ac110005,UID:fd26e28f-428d-11ea-a994-fa163e34d433,ResourceVersion:19855155,Generation:0,CreationTimestamp:2020-01-29 11:53:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 318256615,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rqwpq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rqwpq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-rqwpq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002990740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002990760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:53:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:53:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:53:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 11:53:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-29 11:53:39 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-29 11:53:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://85f92f582d8be50b2e88141af83140c87d9f34a862903bcd578e5b737c1a779e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 29 11:53:49.471: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 29 11:53:51.488: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:53:51.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-bw27t" for this suite.
Jan 29 11:54:31.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:54:31.980: INFO: namespace: e2e-tests-events-bw27t, resource: bindings, ignored listing per whitelist
Jan 29 11:54:31.990: INFO: namespace e2e-tests-events-bw27t deletion completed in 40.233057403s

• [SLOW TEST:52.958 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
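The scheduler and kubelet events the spec looks for can be listed directly; filtering on the involved object is enough to see both (the suite additionally filters by event source, default-scheduler vs. kubelet):

    kubectl get events -n <ns> \
      --field-selector involvedObject.kind=Pod,involvedObject.name=<pod-name>
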
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:54:31.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1caa17ac-428e-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 11:54:32.223: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-bxrc5" to be "success or failure"
Jan 29 11:54:32.258: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.111794ms
Jan 29 11:54:34.278: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055205737s
Jan 29 11:54:36.299: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076207347s
Jan 29 11:54:38.332: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108647633s
Jan 29 11:54:40.424: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.200605115s
Jan 29 11:54:42.475: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.251746997s
STEP: Saw pod success
Jan 29 11:54:42.475: INFO: Pod "pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 11:54:42.499: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 11:54:42.651: INFO: Waiting for pod pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005 to disappear
Jan 29 11:54:42.660: INFO: Pod pod-projected-configmaps-1cab7d73-428e-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:54:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bxrc5" for this suite.
Jan 29 11:54:50.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:54:50.926: INFO: namespace: e2e-tests-projected-bxrc5, resource: bindings, ignored listing per whitelist
Jan 29 11:54:50.991: INFO: namespace e2e-tests-projected-bxrc5 deletion completed in 8.323149449s

• [SLOW TEST:19.001 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:54:50.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pdw6n
Jan 29 11:55:01.268: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pdw6n
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 11:55:01.274: INFO: Initial restart count of pod liveness-http is 0
Jan 29 11:55:17.411: INFO: Restart count of pod e2e-tests-container-probe-pdw6n/liveness-http is now 1 (16.136906573s elapsed)
Jan 29 11:55:35.915: INFO: Restart count of pod e2e-tests-container-probe-pdw6n/liveness-http is now 2 (34.640389827s elapsed)
Jan 29 11:55:58.307: INFO: Restart count of pod e2e-tests-container-probe-pdw6n/liveness-http is now 3 (57.033311306s elapsed)
Jan 29 11:56:16.488: INFO: Restart count of pod e2e-tests-container-probe-pdw6n/liveness-http is now 4 (1m15.214349938s elapsed)
Jan 29 11:57:25.370: INFO: Restart count of pod e2e-tests-container-probe-pdw6n/liveness-http is now 5 (2m24.095655263s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:57:25.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pdw6n" for this suite.
Jan 29 11:57:31.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:57:31.804: INFO: namespace: e2e-tests-container-probe-pdw6n, resource: bindings, ignored listing per whitelist
Jan 29 11:57:32.032: INFO: namespace e2e-tests-container-probe-pdw6n deletion completed in 6.34910931s

• [SLOW TEST:161.041 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
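
The restart counts above come from the kubelet failing an HTTP liveness probe over and over and restarting the container each time. A minimal way to watch the same monotonically increasing RESTARTS column is a deliberately failing probe; this is an illustration (plain nginx has no /healthz, so every probe fails), not the suite's actual liveness image.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz            # path does not exist, so the probe always fails
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF

# the RESTARTS column should only ever go up, as the spec asserts:
kubectl get pod liveness-http-demo -w
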
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:57:32.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 29 11:57:32.774: INFO: Waiting up to 5m0s for pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg" in namespace "e2e-tests-svcaccounts-txl9q" to be "success or failure"
Jan 29 11:57:32.791: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.005709ms
Jan 29 11:57:34.919: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144163638s
Jan 29 11:57:36.945: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170324933s
Jan 29 11:57:39.082: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307212381s
Jan 29 11:57:41.635: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.86064828s
Jan 29 11:57:44.245: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 11.470260648s
Jan 29 11:57:46.261: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Pending", Reason="", readiness=false. Elapsed: 13.486143019s
Jan 29 11:57:48.282: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.507332799s
STEP: Saw pod success
Jan 29 11:57:48.282: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg" satisfied condition "success or failure"
Jan 29 11:57:48.289: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg container token-test: 
STEP: delete the pod
Jan 29 11:57:48.494: INFO: Waiting for pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg to disappear
Jan 29 11:57:48.527: INFO: Pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-pw2sg no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 29 11:57:48.559: INFO: Waiting up to 5m0s for pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx" in namespace "e2e-tests-svcaccounts-txl9q" to be "success or failure"
Jan 29 11:57:48.641: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 82.293032ms
Jan 29 11:57:50.687: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128713663s
Jan 29 11:57:52.714: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155315434s
Jan 29 11:57:55.055: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.495929373s
Jan 29 11:57:57.071: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512372837s
Jan 29 11:57:59.101: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542662761s
Jan 29 11:58:01.171: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.612567745s
Jan 29 11:58:03.188: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.62908027s
Jan 29 11:58:05.199: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.640291797s
STEP: Saw pod success
Jan 29 11:58:05.199: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx" satisfied condition "success or failure"
Jan 29 11:58:05.204: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx container root-ca-test: 
STEP: delete the pod
Jan 29 11:58:05.637: INFO: Waiting for pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx to disappear
Jan 29 11:58:05.691: INFO: Pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-l69gx no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 29 11:58:05.717: INFO: Waiting up to 5m0s for pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7" in namespace "e2e-tests-svcaccounts-txl9q" to be "success or failure"
Jan 29 11:58:05.803: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 86.153468ms
Jan 29 11:58:07.817: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100342301s
Jan 29 11:58:09.834: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117130495s
Jan 29 11:58:12.188: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470684631s
Jan 29 11:58:14.209: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49235925s
Jan 29 11:58:16.465: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.747759429s
Jan 29 11:58:18.485: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.767895631s
Jan 29 11:58:20.519: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.801523339s
Jan 29 11:58:22.604: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.887010531s
STEP: Saw pod success
Jan 29 11:58:22.604: INFO: Pod "pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7" satisfied condition "success or failure"
Jan 29 11:58:22.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7 container namespace-test: 
STEP: delete the pod
Jan 29 11:58:22.769: INFO: Waiting for pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7 to disappear
Jan 29 11:58:22.833: INFO: Pod pod-service-account-88482c38-428e-11ea-8d54-0242ac110005-nlql7 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 11:58:22.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-txl9q" for this suite.
Jan 29 11:58:31.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 11:58:31.126: INFO: namespace: e2e-tests-svcaccounts-txl9q, resource: bindings, ignored listing per whitelist
Jan 29 11:58:31.240: INFO: namespace e2e-tests-svcaccounts-txl9q deletion completed in 8.376253317s

• [SLOW TEST:59.207 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
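
For reference, the three things checked above (token, root CA, namespace) are the standard auto-mounted service-account files under /var/run/secrets/kubernetes.io/serviceaccount. A single pod that dumps them is enough to see the mount (names here are placeholders; the real test runs one pod per file with a dedicated test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-mount-demo
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    command:
    - sh
    - -c
    - ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
EOF

# expect ca.crt, namespace and token in the listing, then the pod's namespace:
kubectl logs svcaccount-mount-demo
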
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 11:58:31.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4zzct
Jan 29 11:58:41.529: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4zzct
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 11:58:41.535: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:02:43.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4zzct" for this suite.
Jan 29 12:02:51.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:02:51.913: INFO: namespace: e2e-tests-container-probe-4zzct, resource: bindings, ignored listing per whitelist
Jan 29 12:02:52.001: INFO: namespace e2e-tests-container-probe-4zzct deletion completed in 8.314052714s

• [SLOW TEST:260.761 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
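
The roughly four minutes between "Initial restart count of pod liveness-exec is 0" and teardown above is the observation window during which the exec probe keeps succeeding. A pod of the same shape, as a sketch: the file the probe cats is created up front and never removed, so RESTARTS should stay at 0.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# check again a few minutes later; RESTARTS should still be 0:
kubectl get pod liveness-exec-demo
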
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:02:52.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 29 12:02:52.237: INFO: Waiting up to 5m0s for pod "pod-46b5344a-428f-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-4585f" to be "success or failure"
Jan 29 12:02:52.299: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.441367ms
Jan 29 12:02:54.394: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156447014s
Jan 29 12:02:56.469: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232039114s
Jan 29 12:02:58.628: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.39086663s
Jan 29 12:03:00.649: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411539297s
Jan 29 12:03:02.922: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.684902474s
STEP: Saw pod success
Jan 29 12:03:02.923: INFO: Pod "pod-46b5344a-428f-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:03:02.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-46b5344a-428f-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 12:03:03.447: INFO: Waiting for pod pod-46b5344a-428f-11ea-8d54-0242ac110005 to disappear
Jan 29 12:03:03.565: INFO: Pod pod-46b5344a-428f-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:03:03.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4585f" for this suite.
Jan 29 12:03:09.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:03:09.772: INFO: namespace: e2e-tests-emptydir-4585f, resource: bindings, ignored listing per whitelist
Jan 29 12:03:09.857: INFO: namespace e2e-tests-emptydir-4585f deletion completed in 6.276050599s

• [SLOW TEST:17.855 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
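
The (non-root,0777,tmpfs) case above means: a non-root user writes a mode-0777 file into a memory-backed emptyDir. A rough equivalent, with busybox standing in for the suite's mounttest image and placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # non-root writer
  containers:
  - name: test-container
    image: busybox
    command:
    - sh
    - -c
    - echo hi > /test-volume/f && chmod 0777 /test-volume/f && ls -ln /test-volume && mount | grep /test-volume
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
EOF

# expect -rwxrwxrwx owned by uid 1000, plus a tmpfs mount entry for /test-volume:
kubectl logs emptydir-tmpfs-demo
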
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:03:09.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:03:18.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-w7v4v" for this suite.
Jan 29 12:04:00.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:04:00.661: INFO: namespace: e2e-tests-kubelet-test-w7v4v, resource: bindings, ignored listing per whitelist
Jan 29 12:04:00.794: INFO: namespace e2e-tests-kubelet-test-w7v4v deletion completed in 42.481171762s

• [SLOW TEST:50.936 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
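
This spec only asserts that whatever the scheduled busybox command writes to stdout shows up in the container log. The manual version is short (names are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the scheduled busybox command'"]
EOF

kubectl logs busybox-logs-demo    # should print the echoed line verbatim
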
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:04:00.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jan 29 12:04:01.112: INFO: Waiting up to 5m0s for pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005" in namespace "e2e-tests-var-expansion-m86l7" to be "success or failure"
Jan 29 12:04:01.128: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.329334ms
Jan 29 12:04:03.141: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029267779s
Jan 29 12:04:05.724: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.611794209s
Jan 29 12:04:07.754: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641758532s
Jan 29 12:04:09.768: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.655504185s
STEP: Saw pod success
Jan 29 12:04:09.768: INFO: Pod "var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:04:09.773: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 12:04:10.566: INFO: Waiting for pod var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005 to disappear
Jan 29 12:04:10.582: INFO: Pod var-expansion-6fc2ae57-428f-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:04:10.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-m86l7" for this suite.
Jan 29 12:04:16.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:04:16.779: INFO: namespace: e2e-tests-var-expansion-m86l7, resource: bindings, ignored listing per whitelist
Jan 29 12:04:16.819: INFO: namespace e2e-tests-var-expansion-m86l7 deletion completed in 6.216104009s

• [SLOW TEST:16.023 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
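
Variable expansion here is the kubelet substituting $(VAR) references in a container's command from its env before the container starts. A minimal sketch (the env name and value are made up); note the quoted heredoc so $(MESSAGE) reaches the API server literally:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: substituted-ok
    command: ["sh", "-c", "echo expanded MESSAGE=$(MESSAGE)"]
EOF

kubectl logs var-expansion-demo   # prints: expanded MESSAGE=substituted-ok
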
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:04:16.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:04:17.016: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 29 12:04:17.070: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mzgws/daemonsets","resourceVersion":"19856171"},"items":null}

Jan 29 12:04:17.074: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mzgws/pods","resourceVersion":"19856171"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:04:17.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-mzgws" for this suite.
Jan 29 12:04:23.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:04:23.257: INFO: namespace: e2e-tests-daemonsets-mzgws, resource: bindings, ignored listing per whitelist
Jan 29 12:04:23.348: INFO: namespace e2e-tests-daemonsets-mzgws deletion completed in 6.260556321s

S [SKIPPING] [6.527 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 29 12:04:17.016: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
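
This spec is skipped rather than failed: the DaemonSet rollback test needs at least two schedulable worker nodes, and this single-node cluster does not meet that. The "(not -1)" most likely reflects the framework's node-count setting being left at its default rather than a real count. A quick way to see what the framework would see, using standard kubectl only:

# list nodes, then check readiness/taints/schedulability per node:
kubectl get nodes -o wide
kubectl describe nodes | grep -E '^Name:|Taints:|Unschedulable:'
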
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:04:23.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gtk5k
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-gtk5k
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-gtk5k
Jan 29 12:04:23.589: INFO: Found 0 stateful pods, waiting for 1
Jan 29 12:04:33.621: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 29 12:04:33.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:04:34.442: INFO: stderr: "I0129 12:04:33.958521    1612 log.go:172] (0xc00081a2c0) (0xc0005d9360) Create stream\nI0129 12:04:33.959139    1612 log.go:172] (0xc00081a2c0) (0xc0005d9360) Stream added, broadcasting: 1\nI0129 12:04:33.972109    1612 log.go:172] (0xc00081a2c0) Reply frame received for 1\nI0129 12:04:33.972296    1612 log.go:172] (0xc00081a2c0) (0xc0005d9400) Create stream\nI0129 12:04:33.972315    1612 log.go:172] (0xc00081a2c0) (0xc0005d9400) Stream added, broadcasting: 3\nI0129 12:04:33.974651    1612 log.go:172] (0xc00081a2c0) Reply frame received for 3\nI0129 12:04:33.974688    1612 log.go:172] (0xc00081a2c0) (0xc0005d94a0) Create stream\nI0129 12:04:33.974699    1612 log.go:172] (0xc00081a2c0) (0xc0005d94a0) Stream added, broadcasting: 5\nI0129 12:04:33.976599    1612 log.go:172] (0xc00081a2c0) Reply frame received for 5\nI0129 12:04:34.302639    1612 log.go:172] (0xc00081a2c0) Data frame received for 3\nI0129 12:04:34.302736    1612 log.go:172] (0xc0005d9400) (3) Data frame handling\nI0129 12:04:34.302783    1612 log.go:172] (0xc0005d9400) (3) Data frame sent\nI0129 12:04:34.433364    1612 log.go:172] (0xc00081a2c0) Data frame received for 1\nI0129 12:04:34.433441    1612 log.go:172] (0xc0005d9360) (1) Data frame handling\nI0129 12:04:34.433460    1612 log.go:172] (0xc0005d9360) (1) Data frame sent\nI0129 12:04:34.433514    1612 log.go:172] (0xc00081a2c0) (0xc0005d9360) Stream removed, broadcasting: 1\nI0129 12:04:34.433621    1612 log.go:172] (0xc00081a2c0) (0xc0005d9400) Stream removed, broadcasting: 3\nI0129 12:04:34.433859    1612 log.go:172] (0xc00081a2c0) (0xc0005d94a0) Stream removed, broadcasting: 5\nI0129 12:04:34.434002    1612 log.go:172] (0xc00081a2c0) (0xc0005d9360) Stream removed, broadcasting: 1\nI0129 12:04:34.434026    1612 log.go:172] (0xc00081a2c0) (0xc0005d9400) Stream removed, broadcasting: 3\nI0129 12:04:34.434037    1612 log.go:172] (0xc00081a2c0) (0xc0005d94a0) Stream removed, broadcasting: 5\nI0129 12:04:34.434214    1612 log.go:172] (0xc00081a2c0) Go away received\n"
Jan 29 12:04:34.442: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:04:34.442: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:04:34.479: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 29 12:04:44.505: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
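
The exec above is how the test toggles readiness: the nginx-based stateful pod serves /index.html as its readiness target (my reading of the mv output; the exact probe belongs to the suite's image), so moving the file away makes the probe fail and ss-0 drops to Ready=false without being restarted. The manual equivalent, reusing the commands and namespace from the log:

# break the readiness target, then watch the Ready condition flip (allow a probe period or two):
kubectl -n e2e-tests-statefulset-gtk5k exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl -n e2e-tests-statefulset-gtk5k get pod ss-0 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# restore the file to bring the pod back to Ready=true:
kubectl -n e2e-tests-statefulset-gtk5k exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
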
Jan 29 12:04:44.505: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:04:44.642: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:04:44.642: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:04:44.642: INFO: 
Jan 29 12:04:44.642: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 29 12:04:46.411: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.90257374s
Jan 29 12:04:47.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.133343304s
Jan 29 12:04:48.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.892306031s
Jan 29 12:04:49.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.844617912s
Jan 29 12:04:52.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.825922722s
Jan 29 12:04:53.329: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.259949708s
Jan 29 12:04:54.454: INFO: Verifying statefulset ss doesn't scale past 3 for another 214.711419ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-gtk5k
Jan 29 12:04:55.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:04:56.105: INFO: stderr: "I0129 12:04:55.802718    1633 log.go:172] (0xc0006fe370) (0xc00066b400) Create stream\nI0129 12:04:55.803016    1633 log.go:172] (0xc0006fe370) (0xc00066b400) Stream added, broadcasting: 1\nI0129 12:04:55.810111    1633 log.go:172] (0xc0006fe370) Reply frame received for 1\nI0129 12:04:55.810156    1633 log.go:172] (0xc0006fe370) (0xc0002aa000) Create stream\nI0129 12:04:55.810166    1633 log.go:172] (0xc0006fe370) (0xc0002aa000) Stream added, broadcasting: 3\nI0129 12:04:55.811414    1633 log.go:172] (0xc0006fe370) Reply frame received for 3\nI0129 12:04:55.811445    1633 log.go:172] (0xc0006fe370) (0xc00029e000) Create stream\nI0129 12:04:55.811462    1633 log.go:172] (0xc0006fe370) (0xc00029e000) Stream added, broadcasting: 5\nI0129 12:04:55.812286    1633 log.go:172] (0xc0006fe370) Reply frame received for 5\nI0129 12:04:55.949527    1633 log.go:172] (0xc0006fe370) Data frame received for 3\nI0129 12:04:55.949634    1633 log.go:172] (0xc0002aa000) (3) Data frame handling\nI0129 12:04:55.949663    1633 log.go:172] (0xc0002aa000) (3) Data frame sent\nI0129 12:04:56.093364    1633 log.go:172] (0xc0006fe370) Data frame received for 1\nI0129 12:04:56.093529    1633 log.go:172] (0xc0006fe370) (0xc0002aa000) Stream removed, broadcasting: 3\nI0129 12:04:56.093612    1633 log.go:172] (0xc00066b400) (1) Data frame handling\nI0129 12:04:56.093632    1633 log.go:172] (0xc0006fe370) (0xc00029e000) Stream removed, broadcasting: 5\nI0129 12:04:56.093651    1633 log.go:172] (0xc00066b400) (1) Data frame sent\nI0129 12:04:56.093662    1633 log.go:172] (0xc0006fe370) (0xc00066b400) Stream removed, broadcasting: 1\nI0129 12:04:56.093707    1633 log.go:172] (0xc0006fe370) Go away received\nI0129 12:04:56.094297    1633 log.go:172] (0xc0006fe370) (0xc00066b400) Stream removed, broadcasting: 1\nI0129 12:04:56.094315    1633 log.go:172] (0xc0006fe370) (0xc0002aa000) Stream removed, broadcasting: 3\nI0129 12:04:56.094324    1633 log.go:172] (0xc0006fe370) (0xc00029e000) Stream removed, broadcasting: 5\n"
Jan 29 12:04:56.105: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:04:56.105: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:04:56.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:04:57.012: INFO: stderr: "I0129 12:04:56.446391    1655 log.go:172] (0xc0006c6000) (0xc0006ec000) Create stream\nI0129 12:04:56.446751    1655 log.go:172] (0xc0006c6000) (0xc0006ec000) Stream added, broadcasting: 1\nI0129 12:04:56.453767    1655 log.go:172] (0xc0006c6000) Reply frame received for 1\nI0129 12:04:56.453834    1655 log.go:172] (0xc0006c6000) (0xc0001a6d20) Create stream\nI0129 12:04:56.453846    1655 log.go:172] (0xc0006c6000) (0xc0001a6d20) Stream added, broadcasting: 3\nI0129 12:04:56.475828    1655 log.go:172] (0xc0006c6000) Reply frame received for 3\nI0129 12:04:56.490210    1655 log.go:172] (0xc0006c6000) (0xc000630000) Create stream\nI0129 12:04:56.490290    1655 log.go:172] (0xc0006c6000) (0xc000630000) Stream added, broadcasting: 5\nI0129 12:04:56.527695    1655 log.go:172] (0xc0006c6000) Reply frame received for 5\nI0129 12:04:56.861368    1655 log.go:172] (0xc0006c6000) Data frame received for 3\nI0129 12:04:56.861500    1655 log.go:172] (0xc0001a6d20) (3) Data frame handling\nI0129 12:04:56.861531    1655 log.go:172] (0xc0001a6d20) (3) Data frame sent\nI0129 12:04:56.862077    1655 log.go:172] (0xc0006c6000) Data frame received for 5\nI0129 12:04:56.862196    1655 log.go:172] (0xc000630000) (5) Data frame handling\nI0129 12:04:56.862233    1655 log.go:172] (0xc000630000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0129 12:04:57.002601    1655 log.go:172] (0xc0006c6000) Data frame received for 1\nI0129 12:04:57.002691    1655 log.go:172] (0xc0006ec000) (1) Data frame handling\nI0129 12:04:57.002711    1655 log.go:172] (0xc0006ec000) (1) Data frame sent\nI0129 12:04:57.003170    1655 log.go:172] (0xc0006c6000) (0xc0001a6d20) Stream removed, broadcasting: 3\nI0129 12:04:57.003253    1655 log.go:172] (0xc0006c6000) (0xc0006ec000) Stream removed, broadcasting: 1\nI0129 12:04:57.003313    1655 log.go:172] (0xc0006c6000) (0xc000630000) Stream removed, broadcasting: 5\nI0129 12:04:57.003518    1655 log.go:172] (0xc0006c6000) Go away received\nI0129 12:04:57.003965    1655 log.go:172] (0xc0006c6000) (0xc0006ec000) Stream removed, broadcasting: 1\nI0129 12:04:57.003979    1655 log.go:172] (0xc0006c6000) (0xc0001a6d20) Stream removed, broadcasting: 3\nI0129 12:04:57.003987    1655 log.go:172] (0xc0006c6000) (0xc000630000) Stream removed, broadcasting: 5\n"
Jan 29 12:04:57.012: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:04:57.012: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:04:57.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:04:57.471: INFO: stderr: "I0129 12:04:57.193163    1676 log.go:172] (0xc0006f80b0) (0xc0006666e0) Create stream\nI0129 12:04:57.193287    1676 log.go:172] (0xc0006f80b0) (0xc0006666e0) Stream added, broadcasting: 1\nI0129 12:04:57.196410    1676 log.go:172] (0xc0006f80b0) Reply frame received for 1\nI0129 12:04:57.196449    1676 log.go:172] (0xc0006f80b0) (0xc0005ee000) Create stream\nI0129 12:04:57.196462    1676 log.go:172] (0xc0006f80b0) (0xc0005ee000) Stream added, broadcasting: 3\nI0129 12:04:57.197326    1676 log.go:172] (0xc0006f80b0) Reply frame received for 3\nI0129 12:04:57.197350    1676 log.go:172] (0xc0006f80b0) (0xc0002ceaa0) Create stream\nI0129 12:04:57.197357    1676 log.go:172] (0xc0006f80b0) (0xc0002ceaa0) Stream added, broadcasting: 5\nI0129 12:04:57.198510    1676 log.go:172] (0xc0006f80b0) Reply frame received for 5\nI0129 12:04:57.329361    1676 log.go:172] (0xc0006f80b0) Data frame received for 5\nI0129 12:04:57.329518    1676 log.go:172] (0xc0002ceaa0) (5) Data frame handling\nI0129 12:04:57.329548    1676 log.go:172] (0xc0002ceaa0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0129 12:04:57.329598    1676 log.go:172] (0xc0006f80b0) Data frame received for 3\nI0129 12:04:57.329610    1676 log.go:172] (0xc0005ee000) (3) Data frame handling\nI0129 12:04:57.329630    1676 log.go:172] (0xc0005ee000) (3) Data frame sent\nI0129 12:04:57.462212    1676 log.go:172] (0xc0006f80b0) (0xc0005ee000) Stream removed, broadcasting: 3\nI0129 12:04:57.462337    1676 log.go:172] (0xc0006f80b0) Data frame received for 1\nI0129 12:04:57.462367    1676 log.go:172] (0xc0006666e0) (1) Data frame handling\nI0129 12:04:57.462397    1676 log.go:172] (0xc0006666e0) (1) Data frame sent\nI0129 12:04:57.462423    1676 log.go:172] (0xc0006f80b0) (0xc0002ceaa0) Stream removed, broadcasting: 5\nI0129 12:04:57.462449    1676 log.go:172] (0xc0006f80b0) (0xc0006666e0) Stream removed, broadcasting: 1\nI0129 12:04:57.462467    1676 log.go:172] (0xc0006f80b0) Go away received\nI0129 12:04:57.463067    1676 log.go:172] (0xc0006f80b0) (0xc0006666e0) Stream removed, broadcasting: 1\nI0129 12:04:57.463086    1676 log.go:172] (0xc0006f80b0) (0xc0005ee000) Stream removed, broadcasting: 3\nI0129 12:04:57.463093    1676 log.go:172] (0xc0006f80b0) (0xc0002ceaa0) Stream removed, broadcasting: 5\n"
Jan 29 12:04:57.471: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:04:57.471: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:04:57.490: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:04:57.490: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:04:57.490: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
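
The interesting part of this spec is that ss-1 and ss-2 were created while ss-0 was still unready, which is the behaviour of Parallel pod management. A minimal StatefulSet of the same shape, if you want to reproduce the burst by hand; the name, labels, plain nginx image and the /index.html readiness probe are assumptions that merely match the mv trick and the "service test" seen in the log:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  podManagementPolicy: Parallel     # burst scaling: don't wait for ordinal N-1 to be Ready
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF

# even with ss-0 unready, scaling up creates ss-1 and ss-2 immediately:
kubectl scale statefulset ss --replicas=3
kubectl get pods -l app=ss-demo -w
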
STEP: Scale down will not halt with unhealthy stateful pod
Jan 29 12:04:57.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:04:57.950: INFO: stderr: "I0129 12:04:57.732564    1698 log.go:172] (0xc00016c790) (0xc000607400) Create stream\nI0129 12:04:57.732760    1698 log.go:172] (0xc00016c790) (0xc000607400) Stream added, broadcasting: 1\nI0129 12:04:57.736405    1698 log.go:172] (0xc00016c790) Reply frame received for 1\nI0129 12:04:57.736447    1698 log.go:172] (0xc00016c790) (0xc00069a000) Create stream\nI0129 12:04:57.736456    1698 log.go:172] (0xc00016c790) (0xc00069a000) Stream added, broadcasting: 3\nI0129 12:04:57.737291    1698 log.go:172] (0xc00016c790) Reply frame received for 3\nI0129 12:04:57.737308    1698 log.go:172] (0xc00016c790) (0xc00069a0a0) Create stream\nI0129 12:04:57.737317    1698 log.go:172] (0xc00016c790) (0xc00069a0a0) Stream added, broadcasting: 5\nI0129 12:04:57.737951    1698 log.go:172] (0xc00016c790) Reply frame received for 5\nI0129 12:04:57.832122    1698 log.go:172] (0xc00016c790) Data frame received for 3\nI0129 12:04:57.832229    1698 log.go:172] (0xc00069a000) (3) Data frame handling\nI0129 12:04:57.832257    1698 log.go:172] (0xc00069a000) (3) Data frame sent\nI0129 12:04:57.939047    1698 log.go:172] (0xc00016c790) (0xc00069a000) Stream removed, broadcasting: 3\nI0129 12:04:57.939196    1698 log.go:172] (0xc00016c790) Data frame received for 1\nI0129 12:04:57.939213    1698 log.go:172] (0xc000607400) (1) Data frame handling\nI0129 12:04:57.939233    1698 log.go:172] (0xc000607400) (1) Data frame sent\nI0129 12:04:57.939241    1698 log.go:172] (0xc00016c790) (0xc000607400) Stream removed, broadcasting: 1\nI0129 12:04:57.939586    1698 log.go:172] (0xc00016c790) (0xc00069a0a0) Stream removed, broadcasting: 5\nI0129 12:04:57.939660    1698 log.go:172] (0xc00016c790) (0xc000607400) Stream removed, broadcasting: 1\nI0129 12:04:57.939685    1698 log.go:172] (0xc00016c790) (0xc00069a000) Stream removed, broadcasting: 3\nI0129 12:04:57.939697    1698 log.go:172] (0xc00016c790) (0xc00069a0a0) Stream removed, broadcasting: 5\n"
Jan 29 12:04:57.950: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:04:57.950: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:04:57.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:04:58.363: INFO: stderr: "I0129 12:04:58.101220    1720 log.go:172] (0xc000138580) (0xc0006475e0) Create stream\nI0129 12:04:58.101329    1720 log.go:172] (0xc000138580) (0xc0006475e0) Stream added, broadcasting: 1\nI0129 12:04:58.104540    1720 log.go:172] (0xc000138580) Reply frame received for 1\nI0129 12:04:58.104574    1720 log.go:172] (0xc000138580) (0xc0007da640) Create stream\nI0129 12:04:58.104580    1720 log.go:172] (0xc000138580) (0xc0007da640) Stream added, broadcasting: 3\nI0129 12:04:58.105287    1720 log.go:172] (0xc000138580) Reply frame received for 3\nI0129 12:04:58.105312    1720 log.go:172] (0xc000138580) (0xc000694000) Create stream\nI0129 12:04:58.105319    1720 log.go:172] (0xc000138580) (0xc000694000) Stream added, broadcasting: 5\nI0129 12:04:58.106219    1720 log.go:172] (0xc000138580) Reply frame received for 5\nI0129 12:04:58.214768    1720 log.go:172] (0xc000138580) Data frame received for 3\nI0129 12:04:58.214807    1720 log.go:172] (0xc0007da640) (3) Data frame handling\nI0129 12:04:58.214826    1720 log.go:172] (0xc0007da640) (3) Data frame sent\nI0129 12:04:58.353292    1720 log.go:172] (0xc000138580) (0xc0007da640) Stream removed, broadcasting: 3\nI0129 12:04:58.353527    1720 log.go:172] (0xc000138580) Data frame received for 1\nI0129 12:04:58.353604    1720 log.go:172] (0xc000138580) (0xc000694000) Stream removed, broadcasting: 5\nI0129 12:04:58.353771    1720 log.go:172] (0xc0006475e0) (1) Data frame handling\nI0129 12:04:58.353871    1720 log.go:172] (0xc0006475e0) (1) Data frame sent\nI0129 12:04:58.353899    1720 log.go:172] (0xc000138580) (0xc0006475e0) Stream removed, broadcasting: 1\nI0129 12:04:58.354480    1720 log.go:172] (0xc000138580) (0xc0006475e0) Stream removed, broadcasting: 1\nI0129 12:04:58.354506    1720 log.go:172] (0xc000138580) (0xc0007da640) Stream removed, broadcasting: 3\nI0129 12:04:58.354514    1720 log.go:172] (0xc000138580) (0xc000694000) Stream removed, broadcasting: 5\nI0129 12:04:58.354583    1720 log.go:172] (0xc000138580) Go away received\n"
Jan 29 12:04:58.364: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:04:58.364: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:04:58.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:04:59.081: INFO: stderr: "I0129 12:04:58.760290    1741 log.go:172] (0xc00073c370) (0xc000693400) Create stream\nI0129 12:04:58.760507    1741 log.go:172] (0xc00073c370) (0xc000693400) Stream added, broadcasting: 1\nI0129 12:04:58.769543    1741 log.go:172] (0xc00073c370) Reply frame received for 1\nI0129 12:04:58.769596    1741 log.go:172] (0xc00073c370) (0xc0006ce000) Create stream\nI0129 12:04:58.769609    1741 log.go:172] (0xc00073c370) (0xc0006ce000) Stream added, broadcasting: 3\nI0129 12:04:58.770368    1741 log.go:172] (0xc00073c370) Reply frame received for 3\nI0129 12:04:58.770398    1741 log.go:172] (0xc00073c370) (0xc0006ce0a0) Create stream\nI0129 12:04:58.770405    1741 log.go:172] (0xc00073c370) (0xc0006ce0a0) Stream added, broadcasting: 5\nI0129 12:04:58.771235    1741 log.go:172] (0xc00073c370) Reply frame received for 5\nI0129 12:04:58.972571    1741 log.go:172] (0xc00073c370) Data frame received for 3\nI0129 12:04:58.972634    1741 log.go:172] (0xc0006ce000) (3) Data frame handling\nI0129 12:04:58.972648    1741 log.go:172] (0xc0006ce000) (3) Data frame sent\nI0129 12:04:59.071662    1741 log.go:172] (0xc00073c370) (0xc0006ce000) Stream removed, broadcasting: 3\nI0129 12:04:59.071744    1741 log.go:172] (0xc00073c370) Data frame received for 1\nI0129 12:04:59.071765    1741 log.go:172] (0xc000693400) (1) Data frame handling\nI0129 12:04:59.071774    1741 log.go:172] (0xc000693400) (1) Data frame sent\nI0129 12:04:59.071786    1741 log.go:172] (0xc00073c370) (0xc0006ce0a0) Stream removed, broadcasting: 5\nI0129 12:04:59.071803    1741 log.go:172] (0xc00073c370) (0xc000693400) Stream removed, broadcasting: 1\nI0129 12:04:59.071825    1741 log.go:172] (0xc00073c370) Go away received\nI0129 12:04:59.072139    1741 log.go:172] (0xc00073c370) (0xc000693400) Stream removed, broadcasting: 1\nI0129 12:04:59.072167    1741 log.go:172] (0xc00073c370) (0xc0006ce000) Stream removed, broadcasting: 3\nI0129 12:04:59.072171    1741 log.go:172] (0xc00073c370) (0xc0006ce0a0) Stream removed, broadcasting: 5\n"
Jan 29 12:04:59.081: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:04:59.081: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:04:59.081: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:04:59.096: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 29 12:05:09.136: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:05:09.137: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:05:09.137: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:05:09.186: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:09.186: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:09.186: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:09.186: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:09.186: INFO: 
Jan 29 12:05:09.186: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:10.206: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:10.206: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:10.206: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:10.206: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:10.207: INFO: 
Jan 29 12:05:10.207: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:11.361: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:11.362: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:11.362: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:11.362: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:11.362: INFO: 
Jan 29 12:05:11.362: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:12.379: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:12.380: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:12.380: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:12.380: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:12.380: INFO: 
Jan 29 12:05:12.380: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:13.391: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:13.391: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:13.391: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:13.391: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:13.391: INFO: 
Jan 29 12:05:13.391: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:14.422: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:14.423: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:14.423: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:14.423: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:14.423: INFO: 
Jan 29 12:05:14.423: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:15.441: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:15.441: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:15.441: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:15.442: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:15.442: INFO: 
Jan 29 12:05:15.442: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:16.464: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:16.464: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:16.465: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:16.465: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:16.465: INFO: 
Jan 29 12:05:16.465: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 29 12:05:17.485: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:17.485: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:17.485: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:17.485: INFO: 
Jan 29 12:05:17.485: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 29 12:05:18.538: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 29 12:05:18.538: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:23 +0000 UTC  }]
Jan 29 12:05:18.539: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:04:44 +0000 UTC  }]
Jan 29 12:05:18.539: INFO: 
Jan 29 12:05:18.539: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-gtk5k
Jan 29 12:05:19.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:05:19.838: INFO: rc: 1
Jan 29 12:05:19.839: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001bd6270 exit status 1   true [0xc00110a178 0xc00110a190 0xc00110a1a8] [0xc00110a178 0xc00110a190 0xc00110a1a8] [0xc00110a188 0xc00110a1a0] [0x935700 0x935700] 0xc001bc57a0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 29 12:05:29.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:05:30.011: INFO: rc: 1
Jan 29 12:05:30.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001417290 exit status 1   true [0xc0004f84c8 0xc0004f8550 0xc0004f85b0] [0xc0004f84c8 0xc0004f8550 0xc0004f85b0] [0xc0004f84f0 0xc0004f85a0] [0x935700 0x935700] 0xc0026203c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:05:40.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:05:40.188: INFO: rc: 1
Jan 29 12:05:40.189: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00143a690 exit status 1   true [0xc001166388 0xc0011663b8 0xc0011663e0] [0xc001166388 0xc0011663b8 0xc0011663e0] [0xc0011663a0 0xc0011663d8] [0x935700 0x935700] 0xc002942e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:05:50.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:05:50.393: INFO: rc: 1
Jan 29 12:05:50.394: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00143a7b0 exit status 1   true [0xc0011663f0 0xc001166420 0xc001166458] [0xc0011663f0 0xc001166420 0xc001166458] [0xc001166408 0xc001166440] [0x935700 0x935700] 0xc002943980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:00.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:00.622: INFO: rc: 1
Jan 29 12:06:00.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fcb740 exit status 1   true [0xc001c48028 0xc001c48040 0xc001c48058] [0xc001c48028 0xc001c48040 0xc001c48058] [0xc001c48038 0xc001c48050] [0x935700 0x935700] 0xc0024cb7a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:10.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:10.814: INFO: rc: 1
Jan 29 12:06:10.815: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fcb890 exit status 1   true [0xc001c48060 0xc001c48078 0xc001c48090] [0xc001c48060 0xc001c48078 0xc001c48090] [0xc001c48070 0xc001c48088] [0x935700 0x935700] 0xc0024cba40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:20.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:20.984: INFO: rc: 1
Jan 29 12:06:20.985: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fcb9e0 exit status 1   true [0xc001c48098 0xc001c480b0 0xc001c480c8] [0xc001c48098 0xc001c480b0 0xc001c480c8] [0xc001c480a8 0xc001c480c0] [0x935700 0x935700] 0xc0024cbce0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:30.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:31.195: INFO: rc: 1
Jan 29 12:06:31.196: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fcbb30 exit status 1   true [0xc001c480d0 0xc001c480e8 0xc001c48100] [0xc001c480d0 0xc001c480e8 0xc001c48100] [0xc001c480e0 0xc001c480f8] [0x935700 0x935700] 0xc0024cbf80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:41.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:41.379: INFO: rc: 1
Jan 29 12:06:41.379: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fcbc50 exit status 1   true [0xc001c48108 0xc001c48120 0xc001c48138] [0xc001c48108 0xc001c48120 0xc001c48138] [0xc001c48118 0xc001c48130] [0x935700 0x935700] 0xc001d04240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:06:51.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:06:51.540: INFO: rc: 1
Jan 29 12:06:51.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014173e0 exit status 1   true [0xc0004f85c0 0xc0004f85e0 0xc0004f8688] [0xc0004f85c0 0xc0004f85e0 0xc0004f8688] [0xc0004f85d0 0xc0004f8670] [0x935700 0x935700] 0xc002620660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:01.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:01.719: INFO: rc: 1
Jan 29 12:07:01.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013c4150 exit status 1   true [0xc00000e100 0xc00110a010 0xc00110a028] [0xc00000e100 0xc00110a010 0xc00110a028] [0xc00110a008 0xc00110a020] [0x935700 0x935700] 0xc0024caa80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:11.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:12.005: INFO: rc: 1
Jan 29 12:07:12.006: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013c4300 exit status 1   true [0xc00110a030 0xc00110a048 0xc00110a060] [0xc00110a030 0xc00110a048 0xc00110a060] [0xc00110a040 0xc00110a058] [0x935700 0x935700] 0xc0024cad20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:22.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:22.198: INFO: rc: 1
Jan 29 12:07:22.198: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c120 exit status 1   true [0xc001c48000 0xc001c48018 0xc001c48030] [0xc001c48000 0xc001c48018 0xc001c48030] [0xc001c48010 0xc001c48028] [0x935700 0x935700] 0xc00224e360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:32.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:32.325: INFO: rc: 1
Jan 29 12:07:32.325: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013c4480 exit status 1   true [0xc00110a068 0xc00110a080 0xc00110a098] [0xc00110a068 0xc00110a080 0xc00110a098] [0xc00110a078 0xc00110a090] [0x935700 0x935700] 0xc0024cb860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:42.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:42.437: INFO: rc: 1
Jan 29 12:07:42.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bd62d0 exit status 1   true [0xc0004f8018 0xc0004f8110 0xc0004f81d0] [0xc0004f8018 0xc0004f8110 0xc0004f81d0] [0xc0004f8108 0xc0004f81b0] [0x935700 0x935700] 0xc001bc41e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:07:52.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:07:52.639: INFO: rc: 1
Jan 29 12:07:52.639: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fca180 exit status 1   true [0xc001166000 0xc001166038 0xc001166068] [0xc001166000 0xc001166038 0xc001166068] [0xc001166028 0xc001166060] [0x935700 0x935700] 0xc0026201e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:02.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:02.783: INFO: rc: 1
Jan 29 12:08:02.783: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c240 exit status 1   true [0xc001c48038 0xc001c48050 0xc001c48068] [0xc001c48038 0xc001c48050 0xc001c48068] [0xc001c48048 0xc001c48060] [0x935700 0x935700] 0xc00224e600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:12.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:12.962: INFO: rc: 1
Jan 29 12:08:12.962: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fca2a0 exit status 1   true [0xc0011660b8 0xc0011660e8 0xc001166138] [0xc0011660b8 0xc0011660e8 0xc001166138] [0xc0011660d0 0xc001166130] [0x935700 0x935700] 0xc002620480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:22.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:23.157: INFO: rc: 1
Jan 29 12:08:23.157: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c540 exit status 1   true [0xc001c48070 0xc001c48088 0xc001c480a0] [0xc001c48070 0xc001c48088 0xc001c480a0] [0xc001c48080 0xc001c48098] [0x935700 0x935700] 0xc00224fd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:33.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:33.343: INFO: rc: 1
Jan 29 12:08:33.343: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c690 exit status 1   true [0xc001c480a8 0xc001c480c0 0xc001c480d8] [0xc001c480a8 0xc001c480c0 0xc001c480d8] [0xc001c480b8 0xc001c480d0] [0x935700 0x935700] 0xc001d04000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:43.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:43.490: INFO: rc: 1
Jan 29 12:08:43.490: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c7b0 exit status 1   true [0xc001c480e0 0xc001c480f8 0xc001c48110] [0xc001c480e0 0xc001c480f8 0xc001c48110] [0xc001c480f0 0xc001c48108] [0x935700 0x935700] 0xc001d042a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:08:53.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:08:53.645: INFO: rc: 1
Jan 29 12:08:53.645: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c930 exit status 1   true [0xc001c48118 0xc001c48130 0xc001c48148] [0xc001c48118 0xc001c48130 0xc001c48148] [0xc001c48128 0xc001c48140] [0x935700 0x935700] 0xc001d04540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:03.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:03.796: INFO: rc: 1
Jan 29 12:09:03.796: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000fca1b0 exit status 1   true [0xc00016e000 0xc001166028 0xc001166060] [0xc00016e000 0xc001166028 0xc001166060] [0xc001166020 0xc001166048] [0x935700 0x935700] 0xc00224e360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:13.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:13.957: INFO: rc: 1
Jan 29 12:09:13.957: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c150 exit status 1   true [0xc00110a000 0xc00110a018 0xc00110a030] [0xc00110a000 0xc00110a018 0xc00110a030] [0xc00110a010 0xc00110a028] [0x935700 0x935700] 0xc001d041e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:23.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:24.153: INFO: rc: 1
Jan 29 12:09:24.153: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bd62a0 exit status 1   true [0xc001c48000 0xc001c48018 0xc001c48030] [0xc001c48000 0xc001c48018 0xc001c48030] [0xc001c48010 0xc001c48028] [0x935700 0x935700] 0xc0026201e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:34.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:34.270: INFO: rc: 1
Jan 29 12:09:34.270: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013c4180 exit status 1   true [0xc0004f8018 0xc0004f8110 0xc0004f81d0] [0xc0004f8018 0xc0004f8110 0xc0004f81d0] [0xc0004f8108 0xc0004f81b0] [0x935700 0x935700] 0xc001bc41e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:44.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:44.435: INFO: rc: 1
Jan 29 12:09:44.436: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bd6480 exit status 1   true [0xc001c48038 0xc001c48050 0xc001c48068] [0xc001c48038 0xc001c48050 0xc001c48068] [0xc001c48048 0xc001c48060] [0x935700 0x935700] 0xc002620480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:09:54.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:09:54.764: INFO: rc: 1
Jan 29 12:09:54.765: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00260c3f0 exit status 1   true [0xc00110a038 0xc00110a050 0xc00110a068] [0xc00110a038 0xc00110a050 0xc00110a068] [0xc00110a048 0xc00110a060] [0x935700 0x935700] 0xc001d04480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:10:04.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:10:04.924: INFO: rc: 1
Jan 29 12:10:04.925: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001bd66f0 exit status 1   true [0xc001c48070 0xc001c48088 0xc001c480a0] [0xc001c48070 0xc001c48088 0xc001c480a0] [0xc001c48080 0xc001c48098] [0x935700 0x935700] 0xc002620720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:10:14.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:10:15.139: INFO: rc: 1
Jan 29 12:10:15.139: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0013c4360 exit status 1   true [0xc0004f81e8 0xc0004f8300 0xc0004f83b8] [0xc0004f81e8 0xc0004f8300 0xc0004f83b8] [0xc0004f82e0 0xc0004f8390] [0x935700 0x935700] 0xc001bc4480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 29 12:10:25.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:10:25.263: INFO: rc: 1
Jan 29 12:10:25.264: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 29 12:10:25.264: INFO: Scaling statefulset ss to 0
Jan 29 12:10:25.286: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 29 12:10:25.290: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gtk5k
Jan 29 12:10:25.295: INFO: Scaling statefulset ss to 0
Jan 29 12:10:25.308: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:10:25.312: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:10:25.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gtk5k" for this suite.
Jan 29 12:10:33.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:10:33.462: INFO: namespace: e2e-tests-statefulset-gtk5k, resource: bindings, ignored listing per whitelist
Jan 29 12:10:33.571: INFO: namespace e2e-tests-statefulset-gtk5k deletion completed in 8.221822825s

• [SLOW TEST:370.223 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
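The long retry loop above keeps re-running the same in-pod command while the StatefulSet is being scaled away, and then scales ss to 0 and waits for status.replicas to catch up. A by-hand equivalent, using the namespace and names from this run (everything else is an illustrative sketch, not the framework's exact code):

# Re-run the command the test retries; rc=1 once the nginx container or the pod itself is gone.
kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gtk5k ss-0 \
  -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# Scale the StatefulSet down and check that the controller reports 0 replicas.
kubectl -n e2e-tests-statefulset-gtk5k scale statefulset ss --replicas=0
kubectl -n e2e-tests-statefulset-gtk5k get statefulset ss -o jsonpath='{.status.replicas}'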
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:10:33.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:10:33.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:10:44.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dcxwq" for this suite.
Jan 29 12:11:26.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:11:26.511: INFO: namespace: e2e-tests-pods-dcxwq, resource: bindings, ignored listing per whitelist
Jan 29 12:11:26.843: INFO: namespace e2e-tests-pods-dcxwq deletion completed in 42.495597716s

• [SLOW TEST:53.271 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
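The websocket spec above only needs a running pod it can exec into; kubectl exec negotiates the streaming connection with the API server on its own. A minimal by-hand version, where the pod name and image are placeholders rather than what the test actually creates:

kubectl run ws-demo --image=busybox --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-demo --timeout=120s
kubectl exec ws-demo -- /bin/sh -c 'echo remote command output'
kubectl delete pod ws-demo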
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:11:26.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:11:27.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 29 12:11:27.181: INFO: stderr: ""
Jan 29 12:11:27.181: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 29 12:11:27.185: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:11:27.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zhpcj" for this suite.
Jan 29 12:11:33.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:11:33.351: INFO: namespace: e2e-tests-kubectl-zhpcj, resource: bindings, ignored listing per whitelist
Jan 29 12:11:33.479: INFO: namespace e2e-tests-kubectl-zhpcj deletion completed in 6.232823282s

S [SKIPPING] [6.635 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 29 12:11:27.185: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
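The describe spec was skipped because the server (v1.13.8) is older than the version the test requires, which it learns from a version query like the one logged above. What it would have exercised is ordinary kubectl describe output; 'my-rc' below is a placeholder, not a resource created by this run:

kubectl --kubeconfig=/root/.kube/config version --short
kubectl describe rc my-rc
kubectl describe pods -l name=my-rc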
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:11:33.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:11:33.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-prq96" to be "success or failure"
Jan 29 12:11:33.812: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 99.35781ms
Jan 29 12:11:35.915: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201843028s
Jan 29 12:11:37.938: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224495731s
Jan 29 12:11:40.645: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.932385055s
Jan 29 12:11:42.734: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.020793587s
Jan 29 12:11:44.763: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.049582737s
STEP: Saw pod success
Jan 29 12:11:44.763: INFO: Pod "downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:11:44.774: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:11:45.036: INFO: Waiting for pod downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005 to disappear
Jan 29 12:11:45.139: INFO: Pod downwardapi-volume-7d85cb95-4290-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:11:45.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-prq96" for this suite.
Jan 29 12:11:51.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:11:51.293: INFO: namespace: e2e-tests-projected-prq96, resource: bindings, ignored listing per whitelist
Jan 29 12:11:51.436: INFO: namespace e2e-tests-projected-prq96 deletion completed in 6.276152019s

• [SLOW TEST:17.957 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
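The downward API pod above mounts a projected volume whose file contains the container's own CPU limit, and the test reads that file back from the pod's output. A hand-rolled pod along the same lines (name, image and mount path are illustrative; the e2e framework uses its own test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m        # report the limit in millicores
EOF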
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:11:51.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-j8qsp
Jan 29 12:12:01.797: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-j8qsp
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 12:12:01.803: INFO: Initial restart count of pod liveness-http is 0
Jan 29 12:12:31.427: INFO: Restart count of pod e2e-tests-container-probe-j8qsp/liveness-http is now 1 (29.623695885s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:12:31.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j8qsp" for this suite.
Jan 29 12:12:37.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:12:37.751: INFO: namespace: e2e-tests-container-probe-j8qsp, resource: bindings, ignored listing per whitelist
Jan 29 12:12:37.799: INFO: namespace e2e-tests-container-probe-j8qsp deletion completed in 6.226895621s

• [SLOW TEST:46.363 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
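The liveness-http pod above exposes /healthz, and the kubelet restarts its container once the probe starts failing (restartCount goes from 0 to 1 in the log). A comparable spec, borrowing the liveness test image from the Kubernetes docs of this era; the image, port and timings are assumptions, not values read from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz, then deliberately starts returning 500 (assumption)
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# Watch the restart count climb once the probe fails.
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'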
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:12:37.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 29 12:12:58.167: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:12:58.176: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:00.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:00.237: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:02.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:02.196: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:04.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:04.236: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:06.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:06.205: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:08.176: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:08.253: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:10.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:10.200: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:12.179: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:12.318: INFO: Pod pod-with-poststart-http-hook still exists
Jan 29 12:13:14.177: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 29 12:13:14.192: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:13:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4vftl" for this suite.
Jan 29 12:13:38.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:13:38.321: INFO: namespace: e2e-tests-container-lifecycle-hook-4vftl, resource: bindings, ignored listing per whitelist
Jan 29 12:13:38.406: INFO: namespace e2e-tests-container-lifecycle-hook-4vftl deletion completed in 24.206092649s

• [SLOW TEST:60.606 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
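The lifecycle-hook spec first starts a helper pod to receive the HTTP GET, then creates a pod whose postStart hook calls it; the log above is the wait for that hooked pod to be deleted again. A trimmed-down hooked pod looks like this, with the host IP, port and names as placeholders for wherever the hook handler happens to be listening:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          host: 10.32.0.5          # placeholder: pod IP of the hook handler
          port: 8080
EOF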
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:13:38.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 29 12:13:39.882: INFO: Pod name wrapped-volume-race-c8a92974-4290-11ea-8d54-0242ac110005: Found 0 pods out of 5
Jan 29 12:13:44.912: INFO: Pod name wrapped-volume-race-c8a92974-4290-11ea-8d54-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c8a92974-4290-11ea-8d54-0242ac110005 in namespace e2e-tests-emptydir-wrapper-tcb5q, will wait for the garbage collector to delete the pods
Jan 29 12:15:59.071: INFO: Deleting ReplicationController wrapped-volume-race-c8a92974-4290-11ea-8d54-0242ac110005 took: 23.839482ms
Jan 29 12:15:59.372: INFO: Terminating ReplicationController wrapped-volume-race-c8a92974-4290-11ea-8d54-0242ac110005 pods took: 300.761572ms
STEP: Creating RC which spawns configmap-volume pods
Jan 29 12:16:43.352: INFO: Pod name wrapped-volume-race-360cf86d-4291-11ea-8d54-0242ac110005: Found 0 pods out of 5
Jan 29 12:16:48.381: INFO: Pod name wrapped-volume-race-360cf86d-4291-11ea-8d54-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-360cf86d-4291-11ea-8d54-0242ac110005 in namespace e2e-tests-emptydir-wrapper-tcb5q, will wait for the garbage collector to delete the pods
Jan 29 12:18:28.685: INFO: Deleting ReplicationController wrapped-volume-race-360cf86d-4291-11ea-8d54-0242ac110005 took: 58.623982ms
Jan 29 12:18:29.086: INFO: Terminating ReplicationController wrapped-volume-race-360cf86d-4291-11ea-8d54-0242ac110005 pods took: 400.994235ms
STEP: Creating RC which spawns configmap-volume pods
Jan 29 12:19:13.099: INFO: Pod name wrapped-volume-race-8f3a398a-4291-11ea-8d54-0242ac110005: Found 0 pods out of 5
Jan 29 12:19:18.127: INFO: Pod name wrapped-volume-race-8f3a398a-4291-11ea-8d54-0242ac110005: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8f3a398a-4291-11ea-8d54-0242ac110005 in namespace e2e-tests-emptydir-wrapper-tcb5q, will wait for the garbage collector to delete the pods
Jan 29 12:21:24.317: INFO: Deleting ReplicationController wrapped-volume-race-8f3a398a-4291-11ea-8d54-0242ac110005 took: 32.957115ms
Jan 29 12:21:24.618: INFO: Terminating ReplicationController wrapped-volume-race-8f3a398a-4291-11ea-8d54-0242ac110005 pods took: 300.551445ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:22:15.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tcb5q" for this suite.
Jan 29 12:22:25.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:22:25.468: INFO: namespace: e2e-tests-emptydir-wrapper-tcb5q, resource: bindings, ignored listing per whitelist
Jan 29 12:22:25.544: INFO: namespace e2e-tests-emptydir-wrapper-tcb5q deletion completed in 10.308199236s

• [SLOW TEST:527.137 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
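The wrapper-volume race above creates 50 ConfigMaps and repeatedly launches a ReplicationController whose pods mount them, checking that volume setup never deadlocks. The shape of one such pod, scaled down to a handful of ConfigMaps and with all names illustrative:

# A few ConfigMaps standing in for the 50 the test creates.
for i in 0 1 2 3 4; do
  kubectl create configmap race-cm-$i --from-literal=data=$i
done

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-demo        # the test uses an RC with 5 replicas instead
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /etc/cm-0 && sleep 5"]
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
  volumes:
  - name: cm-0
    configMap:
      name: race-cm-0
EOF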
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:22:25.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 29 12:22:25.758: INFO: Waiting up to 5m0s for pod "pod-0226d4de-4292-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-blscb" to be "success or failure"
Jan 29 12:22:25.774: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.577127ms
Jan 29 12:22:27.984: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226347281s
Jan 29 12:22:30.368: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609926354s
Jan 29 12:22:32.380: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.622803296s
Jan 29 12:22:34.560: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801997177s
Jan 29 12:22:36.605: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.846990263s
Jan 29 12:22:38.706: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.94787751s
STEP: Saw pod success
Jan 29 12:22:38.706: INFO: Pod "pod-0226d4de-4292-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:22:38.756: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0226d4de-4292-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 12:22:39.039: INFO: Waiting for pod pod-0226d4de-4292-11ea-8d54-0242ac110005 to disappear
Jan 29 12:22:39.061: INFO: Pod pod-0226d4de-4292-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:22:39.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-blscb" for this suite.
Jan 29 12:22:45.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:22:45.303: INFO: namespace: e2e-tests-emptydir-blscb, resource: bindings, ignored listing per whitelist
Jan 29 12:22:45.374: INFO: namespace e2e-tests-emptydir-blscb deletion completed in 6.20121237s

• [SLOW TEST:19.830 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
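
For reference, a minimal sketch of the kind of pod the (root,0777,tmpfs) case above exercises: an emptyDir volume backed by tmpfs (medium: Memory) and a container that creates a 0777 file on it and reports the observed mode and filesystem. The image, command, and names are illustrative assumptions, not the exact conformance pod; the (non-root,0666,default) and (root,0666,tmpfs) variants later in this run differ only in the requested mode, the user the container runs as, and whether medium: Memory is set.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # illustrative; the suite uses its own mounttest image
    command: ["sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && stat -c '%a' /test-volume/file && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir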
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:22:45.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 29 12:22:45.576: INFO: Waiting up to 5m0s for pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-hlpxp" to be "success or failure"
Jan 29 12:22:45.587: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.286792ms
Jan 29 12:22:48.202: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626092309s
Jan 29 12:22:50.218: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.641763596s
Jan 29 12:22:52.254: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.678139061s
Jan 29 12:22:54.273: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.697010473s
Jan 29 12:22:56.283: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.706966231s
STEP: Saw pod success
Jan 29 12:22:56.283: INFO: Pod "pod-0dfc93ef-4292-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:22:56.288: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0dfc93ef-4292-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 12:22:56.457: INFO: Waiting for pod pod-0dfc93ef-4292-11ea-8d54-0242ac110005 to disappear
Jan 29 12:22:56.483: INFO: Pod pod-0dfc93ef-4292-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:22:56.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hlpxp" for this suite.
Jan 29 12:23:02.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:23:02.932: INFO: namespace: e2e-tests-emptydir-hlpxp, resource: bindings, ignored listing per whitelist
Jan 29 12:23:02.952: INFO: namespace e2e-tests-emptydir-hlpxp deletion completed in 6.336978691s

• [SLOW TEST:17.577 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:23:02.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 29 12:23:23.475: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:23.541: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:25.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:25.555: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:27.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:27.559: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:29.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:29.561: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:31.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:31.563: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:33.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:33.555: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:35.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:35.557: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:37.542: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:37.558: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:39.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:39.557: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:41.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:41.567: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:43.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:43.554: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 29 12:23:45.541: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 29 12:23:45.559: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:23:45.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-zv559" for this suite.
Jan 29 12:24:09.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:24:09.724: INFO: namespace: e2e-tests-container-lifecycle-hook-zv559, resource: bindings, ignored listing per whitelist
Jan 29 12:24:09.811: INFO: namespace e2e-tests-container-lifecycle-hook-zv559 deletion completed in 24.211935487s

• [SLOW TEST:66.859 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
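
For reference, the shape of the pod this case deletes in order to trigger the hook: a container with a lifecycle.preStop exec handler that calls back to the handler pod created in the BeforeEach step. Only the pod name is taken from the log above; the image, command, and handler address are illustrative assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook      # name from the log; the rest of the spec is a sketch
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                      # illustrative; the suite uses its own test images
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -q -O- http://HANDLER_POD_IP:8080/echo?msg=prestop"]   # hypothetical callback to the hook-handler pod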
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:24:09.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Jan 29 12:24:23.142: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:24:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-ljxft" for this suite.
Jan 29 12:24:43.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:24:43.310: INFO: namespace: e2e-tests-replicaset-ljxft, resource: bindings, ignored listing per whitelist
Jan 29 12:24:43.346: INFO: namespace e2e-tests-replicaset-ljxft deletion completed in 19.100147698s

• [SLOW TEST:33.534 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
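
For reference, adoption and release here hinge on whether the ReplicaSet selector matches the pre-existing pod's 'name' label. A minimal sketch under that assumption; only the pod/ReplicaSet name is taken from the log, and the image and replica count are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release        # matches the orphan pod's label, so the pod is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine   # image used elsewhere in this run; an assumption here

Relabeling the pod so it no longer matches the selector causes the controller to release it and create a replacement to keep the replica count at 1, which is the "released" step observed above.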
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:24:43.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 29 12:24:43.520: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9tf7s,SelfLink:/api/v1/namespaces/e2e-tests-watch-9tf7s/configmaps/e2e-watch-test-watch-closed,UID:544a6347-4292-11ea-a994-fa163e34d433,ResourceVersion:19858491,Generation:0,CreationTimestamp:2020-01-29 12:24:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 29 12:24:43.520: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9tf7s,SelfLink:/api/v1/namespaces/e2e-tests-watch-9tf7s/configmaps/e2e-watch-test-watch-closed,UID:544a6347-4292-11ea-a994-fa163e34d433,ResourceVersion:19858492,Generation:0,CreationTimestamp:2020-01-29 12:24:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 29 12:24:43.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9tf7s,SelfLink:/api/v1/namespaces/e2e-tests-watch-9tf7s/configmaps/e2e-watch-test-watch-closed,UID:544a6347-4292-11ea-a994-fa163e34d433,ResourceVersion:19858493,Generation:0,CreationTimestamp:2020-01-29 12:24:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 29 12:24:43.555: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9tf7s,SelfLink:/api/v1/namespaces/e2e-tests-watch-9tf7s/configmaps/e2e-watch-test-watch-closed,UID:544a6347-4292-11ea-a994-fa163e34d433,ResourceVersion:19858494,Generation:0,CreationTimestamp:2020-01-29 12:24:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:24:43.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9tf7s" for this suite.
Jan 29 12:24:49.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:24:49.677: INFO: namespace: e2e-tests-watch-9tf7s, resource: bindings, ignored listing per whitelist
Jan 29 12:24:49.761: INFO: namespace e2e-tests-watch-9tf7s deletion completed in 6.198263636s

• [SLOW TEST:6.414 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:24:49.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:24:56.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-4z2zh" for this suite.
Jan 29 12:25:02.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:25:02.785: INFO: namespace: e2e-tests-namespaces-4z2zh, resource: bindings, ignored listing per whitelist
Jan 29 12:25:02.807: INFO: namespace e2e-tests-namespaces-4z2zh deletion completed in 6.123916285s
STEP: Destroying namespace "e2e-tests-nsdeletetest-g8kz9" for this suite.
Jan 29 12:25:02.810: INFO: Namespace e2e-tests-nsdeletetest-g8kz9 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-bcdlv" for this suite.
Jan 29 12:25:08.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:25:08.954: INFO: namespace: e2e-tests-nsdeletetest-bcdlv, resource: bindings, ignored listing per whitelist
Jan 29 12:25:09.021: INFO: namespace e2e-tests-nsdeletetest-bcdlv deletion completed in 6.210699777s

• [SLOW TEST:19.260 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:25:09.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-63ae8edd-4292-11ea-8d54-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-63ae9165-4292-11ea-8d54-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-63ae8edd-4292-11ea-8d54-0242ac110005
STEP: Updating configmap cm-test-opt-upd-63ae9165-4292-11ea-8d54-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-63ae91b4-4292-11ea-8d54-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:25:27.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vq5xv" for this suite.
Jan 29 12:25:51.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:25:51.943: INFO: namespace: e2e-tests-configmap-vq5xv, resource: bindings, ignored listing per whitelist
Jan 29 12:25:51.970: INFO: namespace e2e-tests-configmap-vq5xv deletion completed in 24.261511737s

• [SLOW TEST:42.949 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
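
For reference, the behaviour under test is that configMap volume sources marked optional: true let the pod start and keep running while the referenced ConfigMap is deleted, updated, or created after the fact, with the mounted files converging to the current data. A minimal sketch, with hypothetical names and an illustrative image:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del             # deleted after the pod starts; optional keeps the volume valid
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create          # created only after the pod starts
      optional: true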
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:25:51.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 29 12:25:52.456: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 29 12:25:52.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:54.874: INFO: stderr: ""
Jan 29 12:25:54.874: INFO: stdout: "service/redis-slave created\n"
Jan 29 12:25:54.875: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 29 12:25:54.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:55.614: INFO: stderr: ""
Jan 29 12:25:55.615: INFO: stdout: "service/redis-master created\n"
Jan 29 12:25:55.616: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 29 12:25:55.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:56.230: INFO: stderr: ""
Jan 29 12:25:56.230: INFO: stdout: "service/frontend created\n"
Jan 29 12:25:56.232: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 29 12:25:56.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:56.775: INFO: stderr: ""
Jan 29 12:25:56.775: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 29 12:25:56.777: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 29 12:25:56.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:57.349: INFO: stderr: ""
Jan 29 12:25:57.349: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 29 12:25:57.350: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 29 12:25:57.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:25:58.046: INFO: stderr: ""
Jan 29 12:25:58.046: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 29 12:25:58.046: INFO: Waiting for all frontend pods to be Running.
Jan 29 12:26:28.098: INFO: Waiting for frontend to serve content.
Jan 29 12:26:28.275: INFO: Trying to add a new entry to the guestbook.
Jan 29 12:26:28.313: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 29 12:26:28.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:28.820: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:28.820: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 12:26:28.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:29.126: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:29.126: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 12:26:29.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:29.474: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:29.474: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 12:26:29.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:29.690: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:29.690: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 12:26:29.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:30.163: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:30.163: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 29 12:26:30.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchw4'
Jan 29 12:26:30.381: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:26:30.382: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:26:30.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kchw4" for this suite.
Jan 29 12:27:14.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:27:14.814: INFO: namespace: e2e-tests-kubectl-kchw4, resource: bindings, ignored listing per whitelist
Jan 29 12:27:14.873: INFO: namespace e2e-tests-kubectl-kchw4 deletion completed in 44.469461085s

• [SLOW TEST:82.902 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:27:14.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 12:27:15.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bw5xc'
Jan 29 12:27:15.174: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 12:27:15.174: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jan 29 12:27:17.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bw5xc'
Jan 29 12:27:18.147: INFO: stderr: ""
Jan 29 12:27:18.147: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:27:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bw5xc" for this suite.
Jan 29 12:27:26.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:27:26.497: INFO: namespace: e2e-tests-kubectl-bw5xc, resource: bindings, ignored listing per whitelist
Jan 29 12:27:26.863: INFO: namespace e2e-tests-kubectl-bw5xc deletion completed in 8.595289881s

• [SLOW TEST:11.989 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
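
For reference, the deprecated deployment/apps.v1 generator used above expands to roughly the Deployment below; the run label key, replica count, and container name shown are assumptions about the generator's defaults rather than output captured in this run.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine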
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:27:26.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-b5c41404-4292-11ea-8d54-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-b5c4156b-4292-11ea-8d54-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b5c41404-4292-11ea-8d54-0242ac110005
STEP: Updating configmap cm-test-opt-upd-b5c4156b-4292-11ea-8d54-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-b5c415b5-4292-11ea-8d54-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:27:41.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mpd6z" for this suite.
Jan 29 12:28:05.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:28:05.829: INFO: namespace: e2e-tests-projected-mpd6z, resource: bindings, ignored listing per whitelist
Jan 29 12:28:05.839: INFO: namespace e2e-tests-projected-mpd6z deletion completed in 24.221102411s

• [SLOW TEST:38.976 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:28:05.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:28:06.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-svcz4" to be "success or failure"
Jan 29 12:28:06.274: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.103985ms
Jan 29 12:28:08.552: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307802969s
Jan 29 12:28:10.592: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348109444s
Jan 29 12:28:12.628: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.383903357s
Jan 29 12:28:14.663: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.419083959s
Jan 29 12:28:16.693: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.449180483s
STEP: Saw pod success
Jan 29 12:28:16.693: INFO: Pod "downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:28:16.737: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:28:16.911: INFO: Waiting for pod downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005 to disappear
Jan 29 12:28:16.923: INFO: Pod downwardapi-volume-cd1304b9-4292-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:28:16.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-svcz4" for this suite.
Jan 29 12:28:23.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:28:23.276: INFO: namespace: e2e-tests-downward-api-svcz4, resource: bindings, ignored listing per whitelist
Jan 29 12:28:23.332: INFO: namespace e2e-tests-downward-api-svcz4 deletion completed in 6.398541797s

• [SLOW TEST:17.493 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
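
For reference, the downward API volume in this case exposes the container's own CPU request as a file via resourceFieldRef. A minimal sketch; the names, image, and request value are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                   # report the request in millicores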
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:28:23.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 29 12:28:23.567: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix027167096/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:28:23.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dcwz8" for this suite.
Jan 29 12:28:29.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:28:29.982: INFO: namespace: e2e-tests-kubectl-dcwz8, resource: bindings, ignored listing per whitelist
Jan 29 12:28:30.018: INFO: namespace e2e-tests-kubectl-dcwz8 deletion completed in 6.263694168s

• [SLOW TEST:6.685 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:28:30.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 29 12:28:30.288: INFO: Waiting up to 5m0s for pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-sxqk8" to be "success or failure"
Jan 29 12:28:30.415: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 127.380373ms
Jan 29 12:28:32.456: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16760092s
Jan 29 12:28:34.480: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192137332s
Jan 29 12:28:36.640: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351988711s
Jan 29 12:28:38.709: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.420826961s
Jan 29 12:28:40.727: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.438962319s
STEP: Saw pod success
Jan 29 12:28:40.727: INFO: Pod "pod-db6ecccb-4292-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:28:40.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-db6ecccb-4292-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 12:28:40.921: INFO: Waiting for pod pod-db6ecccb-4292-11ea-8d54-0242ac110005 to disappear
Jan 29 12:28:40.937: INFO: Pod pod-db6ecccb-4292-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:28:40.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sxqk8" for this suite.
Jan 29 12:28:46.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:28:47.109: INFO: namespace: e2e-tests-emptydir-sxqk8, resource: bindings, ignored listing per whitelist
Jan 29 12:28:47.123: INFO: namespace e2e-tests-emptydir-sxqk8 deletion completed in 6.179857517s

• [SLOW TEST:17.105 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:28:47.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xkbdh
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 12:28:47.394: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 12:29:27.737: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-xkbdh PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 12:29:27.737: INFO: >>> kubeConfig: /root/.kube/config
I0129 12:29:27.827198       8 log.go:172] (0xc000ce8630) (0xc0019925a0) Create stream
I0129 12:29:27.827311       8 log.go:172] (0xc000ce8630) (0xc0019925a0) Stream added, broadcasting: 1
I0129 12:29:27.834393       8 log.go:172] (0xc000ce8630) Reply frame received for 1
I0129 12:29:27.834451       8 log.go:172] (0xc000ce8630) (0xc001630000) Create stream
I0129 12:29:27.834466       8 log.go:172] (0xc000ce8630) (0xc001630000) Stream added, broadcasting: 3
I0129 12:29:27.835614       8 log.go:172] (0xc000ce8630) Reply frame received for 3
I0129 12:29:27.835645       8 log.go:172] (0xc000ce8630) (0xc00292e640) Create stream
I0129 12:29:27.835654       8 log.go:172] (0xc000ce8630) (0xc00292e640) Stream added, broadcasting: 5
I0129 12:29:27.838247       8 log.go:172] (0xc000ce8630) Reply frame received for 5
I0129 12:29:28.056025       8 log.go:172] (0xc000ce8630) Data frame received for 3
I0129 12:29:28.056130       8 log.go:172] (0xc001630000) (3) Data frame handling
I0129 12:29:28.056165       8 log.go:172] (0xc001630000) (3) Data frame sent
I0129 12:29:28.203275       8 log.go:172] (0xc000ce8630) Data frame received for 1
I0129 12:29:28.203378       8 log.go:172] (0xc0019925a0) (1) Data frame handling
I0129 12:29:28.203403       8 log.go:172] (0xc0019925a0) (1) Data frame sent
I0129 12:29:28.203425       8 log.go:172] (0xc000ce8630) (0xc001630000) Stream removed, broadcasting: 3
I0129 12:29:28.203488       8 log.go:172] (0xc000ce8630) (0xc0019925a0) Stream removed, broadcasting: 1
I0129 12:29:28.204273       8 log.go:172] (0xc000ce8630) (0xc00292e640) Stream removed, broadcasting: 5
I0129 12:29:28.204391       8 log.go:172] (0xc000ce8630) (0xc0019925a0) Stream removed, broadcasting: 1
I0129 12:29:28.204409       8 log.go:172] (0xc000ce8630) (0xc001630000) Stream removed, broadcasting: 3
I0129 12:29:28.204423       8 log.go:172] (0xc000ce8630) (0xc00292e640) Stream removed, broadcasting: 5
I0129 12:29:28.205108       8 log.go:172] (0xc000ce8630) Go away received
Jan 29 12:29:28.205: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:29:28.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xkbdh" for this suite.
Jan 29 12:29:44.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:29:44.400: INFO: namespace: e2e-tests-pod-network-test-xkbdh, resource: bindings, ignored listing per whitelist
Jan 29 12:29:44.447: INFO: namespace e2e-tests-pod-network-test-xkbdh deletion completed in 16.222804957s

• [SLOW TEST:57.324 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:29:44.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 29 12:29:44.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 29 12:29:44.981: INFO: stderr: ""
Jan 29 12:29:44.982: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:29:44.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mj9dn" for this suite.
Jan 29 12:29:51.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:29:51.151: INFO: namespace: e2e-tests-kubectl-mj9dn, resource: bindings, ignored listing per whitelist
Jan 29 12:29:51.180: INFO: namespace e2e-tests-kubectl-mj9dn deletion completed in 6.18694572s

• [SLOW TEST:6.733 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:29:51.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0bc1ebe4-4293-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 12:29:51.407: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-6mbf4" to be "success or failure"
Jan 29 12:29:51.428: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.756449ms
Jan 29 12:29:53.459: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051518978s
Jan 29 12:29:55.478: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07123461s
Jan 29 12:29:57.561: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154376024s
Jan 29 12:29:59.573: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165799016s
Jan 29 12:30:01.585: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177548141s
STEP: Saw pod success
Jan 29 12:30:01.585: INFO: Pod "pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:30:01.588: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 12:30:02.485: INFO: Waiting for pod pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:30:02.760: INFO: Pod pod-projected-secrets-0bc2d877-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:30:02.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6mbf4" for this suite.
Jan 29 12:30:08.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:30:09.068: INFO: namespace: e2e-tests-projected-6mbf4, resource: bindings, ignored listing per whitelist
Jan 29 12:30:09.078: INFO: namespace e2e-tests-projected-6mbf4 deletion completed in 6.293599631s

• [SLOW TEST:17.897 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
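
For reference, "mappings and Item Mode" means the projected secret source remaps a key to a new file path and sets a per-file mode. A minimal sketch; the secret name follows the pattern in the log (without the UID suffix), and the key, path, and mode are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # hypothetical; the run above appends a unique suffix
          items:
          - key: data-1                     # assumed key name
            path: new-path-data-1           # remapped file name inside the mount
            mode: 0400                      # octal 0400 (owner read-only), i.e. 256 decimal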
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:30:09.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-pwc6
STEP: Creating a pod to test atomic-volume-subpath
Jan 29 12:30:09.473: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pwc6" in namespace "e2e-tests-subpath-gkp9w" to be "success or failure"
Jan 29 12:30:09.553: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 79.508607ms
Jan 29 12:30:11.581: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107295339s
Jan 29 12:30:13.600: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126756363s
Jan 29 12:30:15.630: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156258143s
Jan 29 12:30:17.653: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179430388s
Jan 29 12:30:19.668: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194281759s
Jan 29 12:30:22.152: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.678342535s
Jan 29 12:30:24.163: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.689049271s
Jan 29 12:30:26.182: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 16.707950484s
Jan 29 12:30:28.199: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 18.724925475s
Jan 29 12:30:30.213: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 20.739455568s
Jan 29 12:30:32.226: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 22.752135306s
Jan 29 12:30:34.249: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 24.775693832s
Jan 29 12:30:36.268: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 26.794417464s
Jan 29 12:30:38.286: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 28.812314107s
Jan 29 12:30:40.302: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 30.82837866s
Jan 29 12:30:42.324: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Running", Reason="", readiness=false. Elapsed: 32.850239562s
Jan 29 12:30:44.344: INFO: Pod "pod-subpath-test-secret-pwc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.870609748s
STEP: Saw pod success
Jan 29 12:30:44.344: INFO: Pod "pod-subpath-test-secret-pwc6" satisfied condition "success or failure"
Jan 29 12:30:44.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-pwc6 container test-container-subpath-secret-pwc6: 
STEP: delete the pod
Jan 29 12:30:46.207: INFO: Waiting for pod pod-subpath-test-secret-pwc6 to disappear
Jan 29 12:30:46.257: INFO: Pod pod-subpath-test-secret-pwc6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-pwc6
Jan 29 12:30:46.257: INFO: Deleting pod "pod-subpath-test-secret-pwc6" in namespace "e2e-tests-subpath-gkp9w"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:30:46.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-gkp9w" for this suite.
Jan 29 12:30:52.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:30:52.641: INFO: namespace: e2e-tests-subpath-gkp9w, resource: bindings, ignored listing per whitelist
Jan 29 12:30:52.690: INFO: namespace e2e-tests-subpath-gkp9w deletion completed in 6.32596885s

• [SLOW TEST:43.611 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:30:52.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 12:30:53.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-p4mfd'
Jan 29 12:30:53.243: INFO: stderr: ""
Jan 29 12:30:53.243: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 29 12:30:53.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p4mfd'
Jan 29 12:31:02.866: INFO: stderr: ""
Jan 29 12:31:02.867: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:31:02.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p4mfd" for this suite.
Jan 29 12:31:09.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:31:09.125: INFO: namespace: e2e-tests-kubectl-p4mfd, resource: bindings, ignored listing per whitelist
Jan 29 12:31:09.135: INFO: namespace e2e-tests-kubectl-p4mfd deletion completed in 6.170480468s

• [SLOW TEST:16.444 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:31:09.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:31:09.350: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:31:10.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-wscrm" for this suite.
Jan 29 12:31:16.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:31:16.829: INFO: namespace: e2e-tests-custom-resource-definition-wscrm, resource: bindings, ignored listing per whitelist
Jan 29 12:31:16.925: INFO: namespace e2e-tests-custom-resource-definition-wscrm deletion completed in 6.319824027s

• [SLOW TEST:7.790 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:31:16.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3ee300b6-4293-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 12:31:17.189: INFO: Waiting up to 5m0s for pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-9jb7f" to be "success or failure"
Jan 29 12:31:17.292: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 102.936227ms
Jan 29 12:31:19.302: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112219981s
Jan 29 12:31:21.313: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123776856s
Jan 29 12:31:23.558: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36895035s
Jan 29 12:31:25.584: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.394401856s
Jan 29 12:31:27.633: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.443978947s
STEP: Saw pod success
Jan 29 12:31:27.634: INFO: Pod "pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:31:27.642: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 12:31:28.014: INFO: Waiting for pod pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:31:28.105: INFO: Pod pod-secrets-3ee6d100-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:31:28.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-9jb7f" for this suite.
Jan 29 12:31:34.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:31:34.311: INFO: namespace: e2e-tests-secrets-9jb7f, resource: bindings, ignored listing per whitelist
Jan 29 12:31:34.311: INFO: namespace e2e-tests-secrets-9jb7f deletion completed in 6.192336885s

• [SLOW TEST:17.385 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:31:34.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:31:34.477: INFO: Creating ReplicaSet my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005
Jan 29 12:31:34.532: INFO: Pod name my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005: Found 0 pods out of 1
Jan 29 12:31:39.573: INFO: Pod name my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005: Found 1 pods out of 1
Jan 29 12:31:39.573: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005" is running
Jan 29 12:31:43.616: INFO: Pod "my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005-mgz6m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 12:31:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 12:31:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 12:31:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-29 12:31:34 +0000 UTC Reason: Message:}])
Jan 29 12:31:43.617: INFO: Trying to dial the pod
Jan 29 12:31:48.672: INFO: Controller my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005: Got expected result from replica 1 [my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005-mgz6m]: "my-hostname-basic-493fefbf-4293-11ea-8d54-0242ac110005-mgz6m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:31:48.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-7xjql" for this suite.
Jan 29 12:31:57.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:31:57.349: INFO: namespace: e2e-tests-replicaset-7xjql, resource: bindings, ignored listing per whitelist
Jan 29 12:31:57.350: INFO: namespace e2e-tests-replicaset-7xjql deletion completed in 8.66352258s

• [SLOW TEST:23.039 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:31:57.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 29 12:31:58.275: INFO: Waiting up to 5m0s for pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-jdmdh" to be "success or failure"
Jan 29 12:31:58.297: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 21.635788ms
Jan 29 12:32:00.318: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042435124s
Jan 29 12:32:02.347: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071630974s
Jan 29 12:32:04.565: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.290206886s
Jan 29 12:32:06.600: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.324542493s
Jan 29 12:32:08.611: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.335801048s
STEP: Saw pod success
Jan 29 12:32:08.611: INFO: Pod "downward-api-5769952a-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:32:08.615: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5769952a-4293-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 12:32:08.670: INFO: Waiting for pod downward-api-5769952a-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:32:09.400: INFO: Pod downward-api-5769952a-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:32:09.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jdmdh" for this suite.
Jan 29 12:32:15.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:32:15.885: INFO: namespace: e2e-tests-downward-api-jdmdh, resource: bindings, ignored listing per whitelist
Jan 29 12:32:16.005: INFO: namespace e2e-tests-downward-api-jdmdh deletion completed in 6.26148788s

• [SLOW TEST:18.655 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:32:16.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 12:32:16.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-m5c5d'
Jan 29 12:32:16.429: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 12:32:16.429: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 29 12:32:20.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-m5c5d'
Jan 29 12:32:21.485: INFO: stderr: ""
Jan 29 12:32:21.486: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:32:21.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m5c5d" for this suite.
Jan 29 12:32:28.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:32:28.667: INFO: namespace: e2e-tests-kubectl-m5c5d, resource: bindings, ignored listing per whitelist
Jan 29 12:32:28.752: INFO: namespace e2e-tests-kubectl-m5c5d deletion completed in 6.976076194s

• [SLOW TEST:12.747 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:32:28.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:32:28.967: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 29 12:32:29.030: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 29 12:32:34.833: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 12:32:38.889: INFO: Creating deployment "test-rolling-update-deployment"
Jan 29 12:32:38.914: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 29 12:32:38.928: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 29 12:32:40.953: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 29 12:32:40.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:32:42.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:32:44.969: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:32:47.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715897959, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:32:49.461: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 29 12:32:49.704: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-l8f5p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f5p/deployments/test-rolling-update-deployment,UID:6fa5e9b7-4293-11ea-a994-fa163e34d433,ResourceVersion:19859786,Generation:1,CreationTimestamp:2020-01-29 12:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-29 12:32:39 +0000 UTC 2020-01-29 12:32:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-29 12:32:48 +0000 UTC 2020-01-29 12:32:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 29 12:32:49.714: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-l8f5p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f5p/replicasets/test-rolling-update-deployment-75db98fb4c,UID:6fae238f-4293-11ea-a994-fa163e34d433,ResourceVersion:19859777,Generation:1,CreationTimestamp:2020-01-29 12:32:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6fa5e9b7-4293-11ea-a994-fa163e34d433 0xc001dcfcf7 0xc001dcfcf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 12:32:49.714: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 29 12:32:49.714: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-l8f5p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l8f5p/replicasets/test-rolling-update-controller,UID:69baf5ad-4293-11ea-a994-fa163e34d433,ResourceVersion:19859785,Generation:2,CreationTimestamp:2020-01-29 12:32:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6fa5e9b7-4293-11ea-a994-fa163e34d433 0xc001dcfc1f 0xc001dcfc30}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 12:32:49.724: INFO: Pod "test-rolling-update-deployment-75db98fb4c-75nvs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-75nvs,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-l8f5p,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l8f5p/pods/test-rolling-update-deployment-75db98fb4c-75nvs,UID:6fc25d5f-4293-11ea-a994-fa163e34d433,ResourceVersion:19859776,Generation:0,CreationTimestamp:2020-01-29 12:32:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 6fae238f-4293-11ea-a994-fa163e34d433 0xc0021368d7 0xc0021368d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x4hwn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x4hwn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-x4hwn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002136940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002136960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:32:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:32:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:32:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:32:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-29 12:32:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-29 12:32:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://8476d9f64330391949c910a369f3667262f6b9620b49927a6bf219a6e8aac866}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:32:49.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-l8f5p" for this suite.
Jan 29 12:32:57.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:32:57.857: INFO: namespace: e2e-tests-deployment-l8f5p, resource: bindings, ignored listing per whitelist
Jan 29 12:32:57.955: INFO: namespace e2e-tests-deployment-l8f5p deletion completed in 8.222671122s

• [SLOW TEST:29.201 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:32:57.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:32:58.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-jvx9z" to be "success or failure"
Jan 29 12:32:58.878: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.684357ms
Jan 29 12:33:00.908: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052609607s
Jan 29 12:33:02.965: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109102821s
Jan 29 12:33:05.027: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171539998s
Jan 29 12:33:07.046: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190154821s
Jan 29 12:33:09.079: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.222836535s
STEP: Saw pod success
Jan 29 12:33:09.079: INFO: Pod "downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:33:09.082: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:33:09.912: INFO: Waiting for pod downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:33:09.925: INFO: Pod downwardapi-volume-7b787615-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:33:09.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jvx9z" for this suite.
Jan 29 12:33:16.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:33:16.175: INFO: namespace: e2e-tests-projected-jvx9z, resource: bindings, ignored listing per whitelist
Jan 29 12:33:16.227: INFO: namespace e2e-tests-projected-jvx9z deletion completed in 6.27191808s

• [SLOW TEST:18.272 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:33:16.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-8603ab68-4293-11ea-8d54-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:33:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7p6h7" for this suite.
Jan 29 12:33:54.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:33:54.884: INFO: namespace: e2e-tests-configmap-7p6h7, resource: bindings, ignored listing per whitelist
Jan 29 12:33:54.992: INFO: namespace e2e-tests-configmap-7p6h7 deletion completed in 24.277528456s

• [SLOW TEST:38.765 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:33:54.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:33:55.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-6lm7q" to be "success or failure"
Jan 29 12:33:55.250: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 23.742894ms
Jan 29 12:33:57.267: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040825962s
Jan 29 12:33:59.282: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055784369s
Jan 29 12:34:01.413: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186271263s
Jan 29 12:34:03.857: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630822499s
Jan 29 12:34:06.020: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.793396837s
STEP: Saw pod success
Jan 29 12:34:06.020: INFO: Pod "downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:34:06.030: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:34:06.188: INFO: Waiting for pod downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:34:06.233: INFO: Pod downwardapi-volume-9d22861c-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:34:06.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6lm7q" for this suite.
Jan 29 12:34:12.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:34:12.471: INFO: namespace: e2e-tests-downward-api-6lm7q, resource: bindings, ignored listing per whitelist
Jan 29 12:34:12.491: INFO: namespace e2e-tests-downward-api-6lm7q deletion completed in 6.246251377s

• [SLOW TEST:17.498 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:34:12.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:34:12.851: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-8ntwc" to be "success or failure"
Jan 29 12:34:12.891: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 38.816533ms
Jan 29 12:34:14.920: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06868531s
Jan 29 12:34:16.936: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083952257s
Jan 29 12:34:18.953: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10145112s
Jan 29 12:34:20.994: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142544819s
Jan 29 12:34:23.006: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.154127455s
STEP: Saw pod success
Jan 29 12:34:23.006: INFO: Pod "downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:34:23.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:34:23.061: INFO: Waiting for pod downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:34:23.138: INFO: Pod downwardapi-volume-a7959f32-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:34:23.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8ntwc" for this suite.
Jan 29 12:34:29.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:34:29.356: INFO: namespace: e2e-tests-downward-api-8ntwc, resource: bindings, ignored listing per whitelist
Jan 29 12:34:29.373: INFO: namespace e2e-tests-downward-api-8ntwc deletion completed in 6.222476942s

• [SLOW TEST:16.882 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:34:29.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ktm4q in namespace e2e-tests-proxy-4cg7r
I0129 12:34:29.844238       8 runners.go:184] Created replication controller with name: proxy-service-ktm4q, namespace: e2e-tests-proxy-4cg7r, replica count: 1
I0129 12:34:30.895355       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:31.895715       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:32.896035       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:33.896625       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:34.897209       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:35.897623       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:36.898202       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:37.898674       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:38.899337       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:39.900113       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0129 12:34:40.900611       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:41.901117       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:42.901723       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:43.902350       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:44.903022       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:45.903684       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:46.904255       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0129 12:34:47.905082       8 runners.go:184] proxy-service-ktm4q Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 29 12:34:47.944: INFO: setup took 18.212036198s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 29 12:34:47.997: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-4cg7r/pods/http:proxy-service-ktm4q-888vf:160/proxy/: foo (200; 50.282296ms)
Jan 29 12:34:47.997: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-4cg7r/pods/proxy-service-ktm4q-888vf/proxy/: 
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-c47e8a97-4293-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 12:35:01.269: INFO: Waiting up to 5m0s for pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-74mk5" to be "success or failure"
Jan 29 12:35:01.345: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.643426ms
Jan 29 12:35:03.370: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100623472s
Jan 29 12:35:05.396: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126529442s
Jan 29 12:35:07.437: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167965542s
Jan 29 12:35:09.449: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179085679s
Jan 29 12:35:11.468: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.198153898s
Jan 29 12:35:13.489: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.219586779s
STEP: Saw pod success
Jan 29 12:35:13.490: INFO: Pod "pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:35:13.510: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 29 12:35:13.644: INFO: Waiting for pod pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005 to disappear
Jan 29 12:35:13.654: INFO: Pod pod-configmaps-c47f9c42-4293-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:35:13.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-74mk5" for this suite.
Jan 29 12:35:19.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:35:19.903: INFO: namespace: e2e-tests-configmap-74mk5, resource: bindings, ignored listing per whitelist
Jan 29 12:35:20.020: INFO: namespace e2e-tests-configmap-74mk5 deletion completed in 6.354489264s

• [SLOW TEST:19.023 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
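The ConfigMap test follows the same create-pod-and-read-a-file pattern: a ConfigMap is mounted as a volume and the container prints one of its keys. A hedged sketch (names and data are placeholders, not the generated ones above):

# Sketch only: ConfigMap name, key and data are placeholders.
kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
# Expected log output once the pod has completed: value-1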
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:35:20.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-dtlr2
Jan 29 12:35:30.275: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-dtlr2
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 12:35:30.281: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:39:31.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dtlr2" for this suite.
Jan 29 12:39:37.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:39:37.403: INFO: namespace: e2e-tests-container-probe-dtlr2, resource: bindings, ignored listing per whitelist
Jan 29 12:39:37.490: INFO: namespace e2e-tests-container-probe-dtlr2 deletion completed in 6.170538543s

• [SLOW TEST:257.469 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
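liveness-http above pairs an HTTP GET probe against /healthz with a server that keeps answering 200, and the assertion is simply that restartCount stays at 0 for the four-minute observation window. Roughly the following shape, where the image is an assumption; any container that serves 200 OK on the probed path will do (the suite uses its own test web server):

# Sketch only: "my-healthz-server:latest" is a stand-in image that must
# answer 200 OK on GET /healthz for the probe to keep passing.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: my-healthz-server:latest
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF
# As long as /healthz keeps returning 200, restartCount stays 0:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'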
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:39:37.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-jr2xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jr2xq to expose endpoints map[]
Jan 29 12:39:37.711: INFO: Get endpoints failed (13.632156ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 29 12:39:38.719: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jr2xq exposes endpoints map[] (1.021573332s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jr2xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jr2xq to expose endpoints map[pod1:[80]]
Jan 29 12:39:42.975: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.231103284s elapsed, will retry)
Jan 29 12:39:48.519: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jr2xq exposes endpoints map[pod1:[80]] (9.775272754s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jr2xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jr2xq to expose endpoints map[pod1:[80] pod2:[80]]
Jan 29 12:39:53.018: INFO: Unexpected endpoints: found map[69e24632-4294-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.473624383s elapsed, will retry)
Jan 29 12:39:58.842: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jr2xq exposes endpoints map[pod1:[80] pod2:[80]] (10.297770653s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jr2xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jr2xq to expose endpoints map[pod2:[80]]
Jan 29 12:39:59.063: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jr2xq exposes endpoints map[pod2:[80]] (188.152529ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jr2xq
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jr2xq to expose endpoints map[]
Jan 29 12:40:00.253: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jr2xq exposes endpoints map[] (1.169139342s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:40:00.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jr2xq" for this suite.
Jan 29 12:40:25.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:40:25.164: INFO: namespace: e2e-tests-services-jr2xq, resource: bindings, ignored listing per whitelist
Jan 29 12:40:25.192: INFO: namespace e2e-tests-services-jr2xq deletion completed in 24.282496482s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.702 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
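The Services test drives the Endpoints object by creating and deleting labelled pods behind a selector-based Service: with no matching pods the endpoints map is empty, each pod that becomes ready adds its address on port 80, and deleting a pod removes it again. A minimal, hand-runnable equivalent; the label key, image and names are illustrative:

# Sketch only: label, image and names are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    app: endpoint-test2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: endpoint-test2
spec:
  containers:
  - name: web
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
EOF
# Endpoints appear once pod1 is ready, and disappear again when it is deleted:
kubectl get endpoints endpoint-test2 -o wide
kubectl delete pod pod1
kubectl get endpoints endpoint-test2 -o wide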
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:40:25.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-tf7l6
Jan 29 12:40:34.061: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-tf7l6
STEP: checking the pod's current state and verifying that restartCount is present
Jan 29 12:40:34.064: INFO: Initial restart count of pod liveness-exec is 0
Jan 29 12:41:28.816: INFO: Restart count of pod e2e-tests-container-probe-tf7l6/liveness-exec is now 1 (54.752263578s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:41:29.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tf7l6" for this suite.
Jan 29 12:41:37.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:41:37.122: INFO: namespace: e2e-tests-container-probe-tf7l6, resource: bindings, ignored listing per whitelist
Jan 29 12:41:37.270: INFO: namespace e2e-tests-container-probe-tf7l6 deletion completed in 8.253007848s

• [SLOW TEST:72.077 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
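Here the probe is an exec of `cat /tmp/health`, and the container deliberately removes that file after a while, so the kubelet's probe starts failing and the container is restarted; the test asserts the restart count goes from 0 to 1, which is what the log above records after roughly 55 seconds. A sketch of that shape of pod (the timings mirror the common pattern, not necessarily the suite's exact spec):

# Sketch only: the 30s/600s timings are illustrative.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Watch restartCount tick from 0 to 1 once /tmp/health disappears:
kubectl get pod liveness-exec -w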
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:41:37.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qw292
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 12:41:37.508: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 12:42:13.831: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qw292 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 12:42:13.831: INFO: >>> kubeConfig: /root/.kube/config
I0129 12:42:13.983979       8 log.go:172] (0xc0000eadc0) (0xc0021c9180) Create stream
I0129 12:42:13.984188       8 log.go:172] (0xc0000eadc0) (0xc0021c9180) Stream added, broadcasting: 1
I0129 12:42:13.999037       8 log.go:172] (0xc0000eadc0) Reply frame received for 1
I0129 12:42:13.999153       8 log.go:172] (0xc0000eadc0) (0xc00209a960) Create stream
I0129 12:42:13.999182       8 log.go:172] (0xc0000eadc0) (0xc00209a960) Stream added, broadcasting: 3
I0129 12:42:14.001336       8 log.go:172] (0xc0000eadc0) Reply frame received for 3
I0129 12:42:14.001370       8 log.go:172] (0xc0000eadc0) (0xc002731540) Create stream
I0129 12:42:14.001383       8 log.go:172] (0xc0000eadc0) (0xc002731540) Stream added, broadcasting: 5
I0129 12:42:14.003968       8 log.go:172] (0xc0000eadc0) Reply frame received for 5
I0129 12:42:14.237396       8 log.go:172] (0xc0000eadc0) Data frame received for 3
I0129 12:42:14.237516       8 log.go:172] (0xc00209a960) (3) Data frame handling
I0129 12:42:14.237549       8 log.go:172] (0xc00209a960) (3) Data frame sent
I0129 12:42:14.371559       8 log.go:172] (0xc0000eadc0) Data frame received for 1
I0129 12:42:14.371729       8 log.go:172] (0xc0021c9180) (1) Data frame handling
I0129 12:42:14.371761       8 log.go:172] (0xc0021c9180) (1) Data frame sent
I0129 12:42:14.371779       8 log.go:172] (0xc0000eadc0) (0xc0021c9180) Stream removed, broadcasting: 1
I0129 12:42:14.372303       8 log.go:172] (0xc0000eadc0) (0xc00209a960) Stream removed, broadcasting: 3
I0129 12:42:14.372664       8 log.go:172] (0xc0000eadc0) (0xc002731540) Stream removed, broadcasting: 5
I0129 12:42:14.372740       8 log.go:172] (0xc0000eadc0) Go away received
I0129 12:42:14.372845       8 log.go:172] (0xc0000eadc0) (0xc0021c9180) Stream removed, broadcasting: 1
I0129 12:42:14.372872       8 log.go:172] (0xc0000eadc0) (0xc00209a960) Stream removed, broadcasting: 3
I0129 12:42:14.372903       8 log.go:172] (0xc0000eadc0) (0xc002731540) Stream removed, broadcasting: 5
Jan 29 12:42:14.372: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:42:14.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-qw292" for this suite.
Jan 29 12:42:40.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:42:40.622: INFO: namespace: e2e-tests-pod-network-test-qw292, resource: bindings, ignored listing per whitelist
Jan 29 12:42:40.673: INFO: namespace e2e-tests-pod-network-test-qw292 deletion completed in 26.28370912s

• [SLOW TEST:63.402 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
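The node-to-pod HTTP check reduces to the single exec shown in the log: from a host-network helper pod, curl the target pod's IP on the netserver's /hostName endpoint and compare the reply with the expected pod name. Reproduced by hand; the namespace, pod names and the 10.32.0.4 address are the ones this run generated and will differ elsewhere:

# The command the suite executed, runnable manually against the same objects.
kubectl --kubeconfig=/root/.kube/config exec \
  --namespace=e2e-tests-pod-network-test-qw292 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'"
# Expected output: the name of the target netserver pod (netserver-0 above).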
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:42:40.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-d688d6d9-4294-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 12:42:41.098: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-72th9" to be "success or failure"
Jan 29 12:42:41.141: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 43.492254ms
Jan 29 12:42:43.445: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347459247s
Jan 29 12:42:45.466: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368022278s
Jan 29 12:42:47.734: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635979455s
Jan 29 12:42:50.401: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 9.30279376s
Jan 29 12:42:52.765: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.667531751s
STEP: Saw pod success
Jan 29 12:42:52.766: INFO: Pod "pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:42:53.014: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 12:42:53.285: INFO: Waiting for pod pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005 to disappear
Jan 29 12:42:53.300: INFO: Pod pod-projected-secrets-d68a0add-4294-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:42:53.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-72th9" for this suite.
Jan 29 12:42:59.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:42:59.458: INFO: namespace: e2e-tests-projected-72th9, resource: bindings, ignored listing per whitelist
Jan 29 12:42:59.566: INFO: namespace e2e-tests-projected-72th9 deletion completed in 6.254871914s

• [SLOW TEST:18.893 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
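"With mappings" means the projected volume remaps secret keys to new file paths via items, rather than projecting the key names directly. A hedged sketch; the secret name, key and remapped path are placeholders:

# Sketch only: secret name, key and remapped path are placeholders.
kubectl create secret generic projected-secret-test-map --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/new-path-data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1   # the key is projected under this path
EOF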
SSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:42:59.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:43:23.797: INFO: Container started at 2020-01-29 12:43:06 +0000 UTC, pod became ready at 2020-01-29 12:43:22 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:43:23.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-dnk65" for this suite.
Jan 29 12:43:47.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:43:47.912: INFO: namespace: e2e-tests-container-probe-dnk65, resource: bindings, ignored listing per whitelist
Jan 29 12:43:48.044: INFO: namespace e2e-tests-container-probe-dnk65 deletion completed in 24.240062427s

• [SLOW TEST:48.478 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
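The readiness test checks two things visible in the log: the pod does not report Ready until the probe's initial delay has elapsed after the container starts, and because the probe keeps succeeding the container is never restarted. A sketch with an always-succeeding exec probe and a 30-second delay; the image and timings are illustrative, not the suite's exact values:

# Sketch only: the probe always succeeds, so readiness simply lags container
# start by initialDelaySeconds, and restartCount stays 0.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: readiness
    image: busybox
    args: ["/bin/sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]
      initialDelaySeconds: 30
      periodSeconds: 5
EOF
# READY stays 0/1 for ~30s after the container starts, then flips to 1/1.
kubectl get pod readiness-demo -w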
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:43:48.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:43:48.243: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-w7lcg" to be "success or failure"
Jan 29 12:43:48.279: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.395602ms
Jan 29 12:43:50.297: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054720277s
Jan 29 12:43:52.310: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067262892s
Jan 29 12:43:54.340: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09697093s
Jan 29 12:43:56.359: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115927117s
Jan 29 12:43:58.799: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.556535715s
STEP: Saw pod success
Jan 29 12:43:58.799: INFO: Pod "downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:43:58.807: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:43:59.167: INFO: Waiting for pod downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005 to disappear
Jan 29 12:43:59.197: INFO: Pod downwardapi-volume-fe99ae83-4294-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:43:59.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w7lcg" for this suite.
Jan 29 12:44:05.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:44:05.504: INFO: namespace: e2e-tests-projected-w7lcg, resource: bindings, ignored listing per whitelist
Jan 29 12:44:05.530: INFO: namespace e2e-tests-projected-w7lcg deletion completed in 6.326881792s

• [SLOW TEST:17.486 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
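This variant is the same projected downward API volume as earlier, but the container declares no memory limit, so the file exposed via resourceFieldRef falls back to the node's allocatable memory. Sketch (names are placeholders; note the deliberate absence of resources.limits):

# Sketch only: with no memory limit declared, limits.memory resolves to the
# node's allocatable memory.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-default-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF
# Compare the printed value with the node's allocatable memory:
kubectl get node hunter-server-hu5at5svl7ps -o jsonpath='{.status.allocatable.memory}'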
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:44:05.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 29 12:44:05.704: INFO: Waiting up to 5m0s for pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-j58qk" to be "success or failure"
Jan 29 12:44:05.724: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.447501ms
Jan 29 12:44:07.741: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037411083s
Jan 29 12:44:09.758: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053930992s
Jan 29 12:44:12.099: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394755228s
Jan 29 12:44:14.115: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.411234882s
Jan 29 12:44:16.137: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.433191945s
STEP: Saw pod success
Jan 29 12:44:16.137: INFO: Pod "downward-api-0901ce9c-4295-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:44:16.142: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-0901ce9c-4295-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 12:44:16.332: INFO: Waiting for pod downward-api-0901ce9c-4295-11ea-8d54-0242ac110005 to disappear
Jan 29 12:44:16.340: INFO: Pod downward-api-0901ce9c-4295-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:44:16.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j58qk" for this suite.
Jan 29 12:44:22.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:44:22.767: INFO: namespace: e2e-tests-downward-api-j58qk, resource: bindings, ignored listing per whitelist
Jan 29 12:44:22.767: INFO: namespace e2e-tests-downward-api-j58qk deletion completed in 6.420958374s

• [SLOW TEST:17.236 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
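Unlike the volume-based tests, this one injects downward API data as environment variables; the pod UID comes from a fieldRef on metadata.uid. Sketch (pod and variable names are placeholders):

# Sketch only: pod and variable names are placeholders.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
# The logged value should match the pod's UID:
kubectl get pod downward-api-env-demo -o jsonpath='{.metadata.uid}'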
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:44:22.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-134ee61a-4295-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 12:44:23.056: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-t76xz" to be "success or failure"
Jan 29 12:44:23.069: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.948666ms
Jan 29 12:44:25.085: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029049995s
Jan 29 12:44:27.097: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040701712s
Jan 29 12:44:29.779: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723096997s
Jan 29 12:44:32.084: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.027704594s
Jan 29 12:44:34.104: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.047560759s
STEP: Saw pod success
Jan 29 12:44:34.104: INFO: Pod "pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:44:34.119: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 12:44:34.723: INFO: Waiting for pod pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005 to disappear
Jan 29 12:44:34.793: INFO: Pod pod-projected-configmaps-1350c828-4295-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:44:34.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t76xz" for this suite.
Jan 29 12:44:40.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:44:41.033: INFO: namespace: e2e-tests-projected-t76xz, resource: bindings, ignored listing per whitelist
Jan 29 12:44:41.052: INFO: namespace e2e-tests-projected-t76xz deletion completed in 6.2522722s

• [SLOW TEST:18.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
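defaultMode controls the permissions of the files projected into the volume. A sketch that mounts a ConfigMap through a projected volume with mode 0400; names and the chosen mode are illustrative:

# Sketch only: names and the 0400 mode are illustrative.
kubectl create configmap projected-configmap-test-volume --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap/data-1"]
    volumeMounts:
    - name: projected-configmap
      mountPath: /etc/projected-configmap
      readOnly: true
  volumes:
  - name: projected-configmap
    projected:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      sources:
      - configMap:
          name: projected-configmap-test-volume
EOF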
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:44:41.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-rhcv6
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 29 12:44:41.270: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 29 12:45:17.492: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-rhcv6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 29 12:45:17.492: INFO: >>> kubeConfig: /root/.kube/config
I0129 12:45:17.603962       8 log.go:172] (0xc0000eadc0) (0xc001efa780) Create stream
I0129 12:45:17.604067       8 log.go:172] (0xc0000eadc0) (0xc001efa780) Stream added, broadcasting: 1
I0129 12:45:17.614857       8 log.go:172] (0xc0000eadc0) Reply frame received for 1
I0129 12:45:17.614905       8 log.go:172] (0xc0000eadc0) (0xc001db4000) Create stream
I0129 12:45:17.614921       8 log.go:172] (0xc0000eadc0) (0xc001db4000) Stream added, broadcasting: 3
I0129 12:45:17.616414       8 log.go:172] (0xc0000eadc0) Reply frame received for 3
I0129 12:45:17.616453       8 log.go:172] (0xc0000eadc0) (0xc001992500) Create stream
I0129 12:45:17.616469       8 log.go:172] (0xc0000eadc0) (0xc001992500) Stream added, broadcasting: 5
I0129 12:45:17.617830       8 log.go:172] (0xc0000eadc0) Reply frame received for 5
I0129 12:45:17.777397       8 log.go:172] (0xc0000eadc0) Data frame received for 3
I0129 12:45:17.777525       8 log.go:172] (0xc001db4000) (3) Data frame handling
I0129 12:45:17.777553       8 log.go:172] (0xc001db4000) (3) Data frame sent
I0129 12:45:17.947272       8 log.go:172] (0xc0000eadc0) (0xc001db4000) Stream removed, broadcasting: 3
I0129 12:45:17.947412       8 log.go:172] (0xc0000eadc0) Data frame received for 1
I0129 12:45:17.947424       8 log.go:172] (0xc001efa780) (1) Data frame handling
I0129 12:45:17.947448       8 log.go:172] (0xc001efa780) (1) Data frame sent
I0129 12:45:17.947680       8 log.go:172] (0xc0000eadc0) (0xc001efa780) Stream removed, broadcasting: 1
I0129 12:45:17.947870       8 log.go:172] (0xc0000eadc0) (0xc001992500) Stream removed, broadcasting: 5
I0129 12:45:17.948034       8 log.go:172] (0xc0000eadc0) (0xc001efa780) Stream removed, broadcasting: 1
I0129 12:45:17.948051       8 log.go:172] (0xc0000eadc0) (0xc001db4000) Stream removed, broadcasting: 3
I0129 12:45:17.948068       8 log.go:172] (0xc0000eadc0) (0xc001992500) Stream removed, broadcasting: 5
I0129 12:45:17.948158       8 log.go:172] (0xc0000eadc0) Go away received
Jan 29 12:45:17.948: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:45:17.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-rhcv6" for this suite.
Jan 29 12:45:44.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:45:44.199: INFO: namespace: e2e-tests-pod-network-test-rhcv6, resource: bindings, ignored listing per whitelist
Jan 29 12:45:44.276: INFO: namespace e2e-tests-pod-network-test-rhcv6 deletion completed in 26.310339687s

• [SLOW TEST:63.224 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
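The intra-pod UDP check also reduces to one exec, visible verbatim in the log: ask the test container pod (via its HTTP /dial helper on port 8080) to send a UDP hostName request to the netserver pod on port 8081 and report what came back. Reproduced by hand; the namespace, pod name and 10.32.0.x addresses belong to this particular run:

# The command the suite executed, runnable manually against the same objects.
kubectl --kubeconfig=/root/.kube/config exec \
  --namespace=e2e-tests-pod-network-test-rhcv6 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
# A successful dial returns a small JSON document listing the responding host name(s).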
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:45:44.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 12:45:44.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-k8nnv'
Jan 29 12:45:46.162: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 29 12:45:46.163: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 29 12:45:48.282: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-qsvj4]
Jan 29 12:45:48.282: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-qsvj4" in namespace "e2e-tests-kubectl-k8nnv" to be "running and ready"
Jan 29 12:45:48.288: INFO: Pod "e2e-test-nginx-rc-qsvj4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192845ms
Jan 29 12:45:50.300: INFO: Pod "e2e-test-nginx-rc-qsvj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018007594s
Jan 29 12:45:52.413: INFO: Pod "e2e-test-nginx-rc-qsvj4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130789301s
Jan 29 12:45:54.435: INFO: Pod "e2e-test-nginx-rc-qsvj4": Phase="Running", Reason="", readiness=true. Elapsed: 6.15311855s
Jan 29 12:45:54.435: INFO: Pod "e2e-test-nginx-rc-qsvj4" satisfied condition "running and ready"
Jan 29 12:45:54.435: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-qsvj4]
Jan 29 12:45:54.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k8nnv'
Jan 29 12:45:54.669: INFO: stderr: ""
Jan 29 12:45:54.670: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 29 12:45:54.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-k8nnv'
Jan 29 12:45:54.868: INFO: stderr: ""
Jan 29 12:45:54.868: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:45:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k8nnv" for this suite.
Jan 29 12:46:16.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:46:17.205: INFO: namespace: e2e-tests-kubectl-k8nnv, resource: bindings, ignored listing per whitelist
Jan 29 12:46:17.222: INFO: namespace e2e-tests-kubectl-k8nnv deletion completed in 22.33405327s

• [SLOW TEST:32.945 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
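The kubectl test drives the deprecated run/v1 generator, which creates a ReplicationController directly from an image; the deprecation warning captured in the log already names the replacements. The same sequence by hand with the 1.13-era kubectl used here, minus the generated namespace flag:

# The commands the suite ran, without the per-test namespace. Note that
# --generator=run/v1 is deprecated in favour of run-pod/v1 or `kubectl create`.
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl get rc e2e-test-nginx-rc            # verify the rc was created
kubectl get pods -l run=e2e-test-nginx-rc   # and the pod it controls
kubectl logs rc/e2e-test-nginx-rc           # may be empty, as in the run above
kubectl delete rc e2e-test-nginx-rc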
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:46:17.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-c75qw
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-c75qw
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-c75qw
Jan 29 12:46:17.626: INFO: Found 0 stateful pods, waiting for 1
Jan 29 12:46:27.647: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 29 12:46:27.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:46:28.333: INFO: stderr: "I0129 12:46:27.943029    2932 log.go:172] (0xc0003d2420) (0xc000637540) Create stream\nI0129 12:46:27.943226    2932 log.go:172] (0xc0003d2420) (0xc000637540) Stream added, broadcasting: 1\nI0129 12:46:27.950270    2932 log.go:172] (0xc0003d2420) Reply frame received for 1\nI0129 12:46:27.950356    2932 log.go:172] (0xc0003d2420) (0xc000392000) Create stream\nI0129 12:46:27.950369    2932 log.go:172] (0xc0003d2420) (0xc000392000) Stream added, broadcasting: 3\nI0129 12:46:27.953161    2932 log.go:172] (0xc0003d2420) Reply frame received for 3\nI0129 12:46:27.953388    2932 log.go:172] (0xc0003d2420) (0xc0001d8000) Create stream\nI0129 12:46:27.953421    2932 log.go:172] (0xc0003d2420) (0xc0001d8000) Stream added, broadcasting: 5\nI0129 12:46:27.956816    2932 log.go:172] (0xc0003d2420) Reply frame received for 5\nI0129 12:46:28.110101    2932 log.go:172] (0xc0003d2420) Data frame received for 3\nI0129 12:46:28.110166    2932 log.go:172] (0xc000392000) (3) Data frame handling\nI0129 12:46:28.110184    2932 log.go:172] (0xc000392000) (3) Data frame sent\nI0129 12:46:28.316846    2932 log.go:172] (0xc0003d2420) (0xc0001d8000) Stream removed, broadcasting: 5\nI0129 12:46:28.317383    2932 log.go:172] (0xc0003d2420) (0xc000392000) Stream removed, broadcasting: 3\nI0129 12:46:28.317608    2932 log.go:172] (0xc0003d2420) Data frame received for 1\nI0129 12:46:28.317653    2932 log.go:172] (0xc000637540) (1) Data frame handling\nI0129 12:46:28.317689    2932 log.go:172] (0xc000637540) (1) Data frame sent\nI0129 12:46:28.317709    2932 log.go:172] (0xc0003d2420) (0xc000637540) Stream removed, broadcasting: 1\nI0129 12:46:28.317760    2932 log.go:172] (0xc0003d2420) Go away received\nI0129 12:46:28.319336    2932 log.go:172] (0xc0003d2420) (0xc000637540) Stream removed, broadcasting: 1\nI0129 12:46:28.319372    2932 log.go:172] (0xc0003d2420) (0xc000392000) Stream removed, broadcasting: 3\nI0129 12:46:28.319408    2932 log.go:172] (0xc0003d2420) (0xc0001d8000) Stream removed, broadcasting: 5\n"
Jan 29 12:46:28.334: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:46:28.334: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:46:28.352: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:46:28.352: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:46:28.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999825s
Jan 29 12:46:29.497: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.93407538s
Jan 29 12:46:30.580: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.883292064s
Jan 29 12:46:31.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.801790521s
Jan 29 12:46:32.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.782137776s
Jan 29 12:46:33.719: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.751144945s
Jan 29 12:46:34.739: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.662315485s
Jan 29 12:46:35.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.642889634s
Jan 29 12:46:36.777: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.622492562s
Jan 29 12:46:37.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 604.697831ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-c75qw
Jan 29 12:46:38.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:46:39.348: INFO: stderr: "I0129 12:46:39.022394    2955 log.go:172] (0xc000724370) (0xc000744640) Create stream\nI0129 12:46:39.022529    2955 log.go:172] (0xc000724370) (0xc000744640) Stream added, broadcasting: 1\nI0129 12:46:39.028513    2955 log.go:172] (0xc000724370) Reply frame received for 1\nI0129 12:46:39.028580    2955 log.go:172] (0xc000724370) (0xc0007446e0) Create stream\nI0129 12:46:39.028587    2955 log.go:172] (0xc000724370) (0xc0007446e0) Stream added, broadcasting: 3\nI0129 12:46:39.029834    2955 log.go:172] (0xc000724370) Reply frame received for 3\nI0129 12:46:39.029869    2955 log.go:172] (0xc000724370) (0xc000478c80) Create stream\nI0129 12:46:39.029879    2955 log.go:172] (0xc000724370) (0xc000478c80) Stream added, broadcasting: 5\nI0129 12:46:39.031101    2955 log.go:172] (0xc000724370) Reply frame received for 5\nI0129 12:46:39.149955    2955 log.go:172] (0xc000724370) Data frame received for 3\nI0129 12:46:39.150394    2955 log.go:172] (0xc0007446e0) (3) Data frame handling\nI0129 12:46:39.150431    2955 log.go:172] (0xc0007446e0) (3) Data frame sent\nI0129 12:46:39.335600    2955 log.go:172] (0xc000724370) Data frame received for 1\nI0129 12:46:39.335783    2955 log.go:172] (0xc000724370) (0xc0007446e0) Stream removed, broadcasting: 3\nI0129 12:46:39.335852    2955 log.go:172] (0xc000744640) (1) Data frame handling\nI0129 12:46:39.335874    2955 log.go:172] (0xc000744640) (1) Data frame sent\nI0129 12:46:39.335921    2955 log.go:172] (0xc000724370) (0xc000744640) Stream removed, broadcasting: 1\nI0129 12:46:39.336586    2955 log.go:172] (0xc000724370) (0xc000478c80) Stream removed, broadcasting: 5\nI0129 12:46:39.336653    2955 log.go:172] (0xc000724370) (0xc000744640) Stream removed, broadcasting: 1\nI0129 12:46:39.336663    2955 log.go:172] (0xc000724370) (0xc0007446e0) Stream removed, broadcasting: 3\nI0129 12:46:39.336671    2955 log.go:172] (0xc000724370) (0xc000478c80) Stream removed, broadcasting: 5\n"
Jan 29 12:46:39.348: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:46:39.348: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:46:39.382: INFO: Found 1 stateful pods, waiting for 3
Jan 29 12:46:49.402: INFO: Found 2 stateful pods, waiting for 3
Jan 29 12:46:59.402: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:46:59.402: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:46:59.402: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 29 12:47:09.403: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:47:09.403: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 29 12:47:09.403: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 29 12:47:09.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:47:09.998: INFO: stderr: "I0129 12:47:09.647328    2976 log.go:172] (0xc00073c2c0) (0xc00075e640) Create stream\nI0129 12:47:09.647487    2976 log.go:172] (0xc00073c2c0) (0xc00075e640) Stream added, broadcasting: 1\nI0129 12:47:09.653347    2976 log.go:172] (0xc00073c2c0) Reply frame received for 1\nI0129 12:47:09.653458    2976 log.go:172] (0xc00073c2c0) (0xc0005d4c80) Create stream\nI0129 12:47:09.653481    2976 log.go:172] (0xc00073c2c0) (0xc0005d4c80) Stream added, broadcasting: 3\nI0129 12:47:09.654445    2976 log.go:172] (0xc00073c2c0) Reply frame received for 3\nI0129 12:47:09.654508    2976 log.go:172] (0xc00073c2c0) (0xc0006bc000) Create stream\nI0129 12:47:09.654515    2976 log.go:172] (0xc00073c2c0) (0xc0006bc000) Stream added, broadcasting: 5\nI0129 12:47:09.655876    2976 log.go:172] (0xc00073c2c0) Reply frame received for 5\nI0129 12:47:09.786953    2976 log.go:172] (0xc00073c2c0) Data frame received for 3\nI0129 12:47:09.787059    2976 log.go:172] (0xc0005d4c80) (3) Data frame handling\nI0129 12:47:09.787095    2976 log.go:172] (0xc0005d4c80) (3) Data frame sent\nI0129 12:47:09.986705    2976 log.go:172] (0xc00073c2c0) (0xc0005d4c80) Stream removed, broadcasting: 3\nI0129 12:47:09.986948    2976 log.go:172] (0xc00073c2c0) Data frame received for 1\nI0129 12:47:09.987002    2976 log.go:172] (0xc00075e640) (1) Data frame handling\nI0129 12:47:09.987033    2976 log.go:172] (0xc00075e640) (1) Data frame sent\nI0129 12:47:09.987056    2976 log.go:172] (0xc00073c2c0) (0xc00075e640) Stream removed, broadcasting: 1\nI0129 12:47:09.987307    2976 log.go:172] (0xc00073c2c0) (0xc0006bc000) Stream removed, broadcasting: 5\nI0129 12:47:09.987499    2976 log.go:172] (0xc00073c2c0) Go away received\nI0129 12:47:09.987643    2976 log.go:172] (0xc00073c2c0) (0xc00075e640) Stream removed, broadcasting: 1\nI0129 12:47:09.987658    2976 log.go:172] (0xc00073c2c0) (0xc0005d4c80) Stream removed, broadcasting: 3\nI0129 12:47:09.987666    2976 log.go:172] (0xc00073c2c0) (0xc0006bc000) Stream removed, broadcasting: 5\n"
Jan 29 12:47:09.999: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:47:09.999: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:47:09.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:47:10.707: INFO: stderr: "I0129 12:47:10.234936    2999 log.go:172] (0xc000732370) (0xc00078e640) Create stream\nI0129 12:47:10.235180    2999 log.go:172] (0xc000732370) (0xc00078e640) Stream added, broadcasting: 1\nI0129 12:47:10.241501    2999 log.go:172] (0xc000732370) Reply frame received for 1\nI0129 12:47:10.241542    2999 log.go:172] (0xc000732370) (0xc0005b8e60) Create stream\nI0129 12:47:10.241556    2999 log.go:172] (0xc000732370) (0xc0005b8e60) Stream added, broadcasting: 3\nI0129 12:47:10.242639    2999 log.go:172] (0xc000732370) Reply frame received for 3\nI0129 12:47:10.242679    2999 log.go:172] (0xc000732370) (0xc00078e6e0) Create stream\nI0129 12:47:10.242698    2999 log.go:172] (0xc000732370) (0xc00078e6e0) Stream added, broadcasting: 5\nI0129 12:47:10.244299    2999 log.go:172] (0xc000732370) Reply frame received for 5\nI0129 12:47:10.421799    2999 log.go:172] (0xc000732370) Data frame received for 3\nI0129 12:47:10.421861    2999 log.go:172] (0xc0005b8e60) (3) Data frame handling\nI0129 12:47:10.421895    2999 log.go:172] (0xc0005b8e60) (3) Data frame sent\nI0129 12:47:10.698597    2999 log.go:172] (0xc000732370) Data frame received for 1\nI0129 12:47:10.698838    2999 log.go:172] (0xc000732370) (0xc00078e6e0) Stream removed, broadcasting: 5\nI0129 12:47:10.698875    2999 log.go:172] (0xc00078e640) (1) Data frame handling\nI0129 12:47:10.698922    2999 log.go:172] (0xc000732370) (0xc0005b8e60) Stream removed, broadcasting: 3\nI0129 12:47:10.699061    2999 log.go:172] (0xc00078e640) (1) Data frame sent\nI0129 12:47:10.699077    2999 log.go:172] (0xc000732370) (0xc00078e640) Stream removed, broadcasting: 1\nI0129 12:47:10.699102    2999 log.go:172] (0xc000732370) Go away received\nI0129 12:47:10.699665    2999 log.go:172] (0xc000732370) (0xc00078e640) Stream removed, broadcasting: 1\nI0129 12:47:10.699741    2999 log.go:172] (0xc000732370) (0xc0005b8e60) Stream removed, broadcasting: 3\nI0129 12:47:10.699769    2999 log.go:172] (0xc000732370) (0xc00078e6e0) Stream removed, broadcasting: 5\n"
Jan 29 12:47:10.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:47:10.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:47:10.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 29 12:47:11.121: INFO: stderr: "I0129 12:47:10.884386    3020 log.go:172] (0xc0006e4370) (0xc000708640) Create stream\nI0129 12:47:10.884529    3020 log.go:172] (0xc0006e4370) (0xc000708640) Stream added, broadcasting: 1\nI0129 12:47:10.888088    3020 log.go:172] (0xc0006e4370) Reply frame received for 1\nI0129 12:47:10.888118    3020 log.go:172] (0xc0006e4370) (0xc0005b4c80) Create stream\nI0129 12:47:10.888127    3020 log.go:172] (0xc0006e4370) (0xc0005b4c80) Stream added, broadcasting: 3\nI0129 12:47:10.889423    3020 log.go:172] (0xc0006e4370) Reply frame received for 3\nI0129 12:47:10.889468    3020 log.go:172] (0xc0006e4370) (0xc0000f0000) Create stream\nI0129 12:47:10.889516    3020 log.go:172] (0xc0006e4370) (0xc0000f0000) Stream added, broadcasting: 5\nI0129 12:47:10.890688    3020 log.go:172] (0xc0006e4370) Reply frame received for 5\nI0129 12:47:11.021903    3020 log.go:172] (0xc0006e4370) Data frame received for 3\nI0129 12:47:11.021958    3020 log.go:172] (0xc0005b4c80) (3) Data frame handling\nI0129 12:47:11.021973    3020 log.go:172] (0xc0005b4c80) (3) Data frame sent\nI0129 12:47:11.113751    3020 log.go:172] (0xc0006e4370) (0xc0000f0000) Stream removed, broadcasting: 5\nI0129 12:47:11.114152    3020 log.go:172] (0xc0006e4370) Data frame received for 1\nI0129 12:47:11.114202    3020 log.go:172] (0xc0006e4370) (0xc0005b4c80) Stream removed, broadcasting: 3\nI0129 12:47:11.114248    3020 log.go:172] (0xc000708640) (1) Data frame handling\nI0129 12:47:11.114267    3020 log.go:172] (0xc000708640) (1) Data frame sent\nI0129 12:47:11.114282    3020 log.go:172] (0xc0006e4370) (0xc000708640) Stream removed, broadcasting: 1\nI0129 12:47:11.114327    3020 log.go:172] (0xc0006e4370) Go away received\nI0129 12:47:11.115008    3020 log.go:172] (0xc0006e4370) (0xc000708640) Stream removed, broadcasting: 1\nI0129 12:47:11.115069    3020 log.go:172] (0xc0006e4370) (0xc0005b4c80) Stream removed, broadcasting: 3\nI0129 12:47:11.115079    3020 log.go:172] (0xc0006e4370) (0xc0000f0000) Stream removed, broadcasting: 5\n"
Jan 29 12:47:11.122: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 29 12:47:11.122: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 29 12:47:11.122: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:47:11.133: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 29 12:47:21.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:47:21.174: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:47:21.174: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 29 12:47:21.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999995773s
Jan 29 12:47:22.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.932147719s
Jan 29 12:47:23.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.914298727s
Jan 29 12:47:24.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.89549431s
Jan 29 12:47:25.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.879459973s
Jan 29 12:47:26.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.863165289s
Jan 29 12:47:27.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.844421732s
Jan 29 12:47:29.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.826652908s
Jan 29 12:47:30.257: INFO: Verifying statefulset ss doesn't scale past 3 for another 956.69539ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-c75qw
Jan 29 12:47:31.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:47:32.092: INFO: stderr: "I0129 12:47:31.714998    3041 log.go:172] (0xc0006f8370) (0xc00079a640) Create stream\nI0129 12:47:31.716116    3041 log.go:172] (0xc0006f8370) (0xc00079a640) Stream added, broadcasting: 1\nI0129 12:47:31.731344    3041 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0129 12:47:31.731429    3041 log.go:172] (0xc0006f8370) (0xc00062ec80) Create stream\nI0129 12:47:31.731443    3041 log.go:172] (0xc0006f8370) (0xc00062ec80) Stream added, broadcasting: 3\nI0129 12:47:31.732867    3041 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0129 12:47:31.732896    3041 log.go:172] (0xc0006f8370) (0xc00079a6e0) Create stream\nI0129 12:47:31.732910    3041 log.go:172] (0xc0006f8370) (0xc00079a6e0) Stream added, broadcasting: 5\nI0129 12:47:31.734602    3041 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0129 12:47:31.910220    3041 log.go:172] (0xc0006f8370) Data frame received for 3\nI0129 12:47:31.910526    3041 log.go:172] (0xc00062ec80) (3) Data frame handling\nI0129 12:47:31.910611    3041 log.go:172] (0xc00062ec80) (3) Data frame sent\nI0129 12:47:32.074501    3041 log.go:172] (0xc0006f8370) (0xc00062ec80) Stream removed, broadcasting: 3\nI0129 12:47:32.074766    3041 log.go:172] (0xc0006f8370) Data frame received for 1\nI0129 12:47:32.074820    3041 log.go:172] (0xc00079a640) (1) Data frame handling\nI0129 12:47:32.074881    3041 log.go:172] (0xc00079a640) (1) Data frame sent\nI0129 12:47:32.074908    3041 log.go:172] (0xc0006f8370) (0xc00079a640) Stream removed, broadcasting: 1\nI0129 12:47:32.075207    3041 log.go:172] (0xc0006f8370) (0xc00079a6e0) Stream removed, broadcasting: 5\nI0129 12:47:32.075516    3041 log.go:172] (0xc0006f8370) Go away received\nI0129 12:47:32.075682    3041 log.go:172] (0xc0006f8370) (0xc00079a640) Stream removed, broadcasting: 1\nI0129 12:47:32.075752    3041 log.go:172] (0xc0006f8370) (0xc00062ec80) Stream removed, broadcasting: 3\nI0129 12:47:32.075785    3041 log.go:172] (0xc0006f8370) (0xc00079a6e0) Stream removed, broadcasting: 5\n"
Jan 29 12:47:32.092: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:47:32.092: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:47:32.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:47:32.929: INFO: stderr: "I0129 12:47:32.596959    3061 log.go:172] (0xc0007424d0) (0xc0008426e0) Create stream\nI0129 12:47:32.597146    3061 log.go:172] (0xc0007424d0) (0xc0008426e0) Stream added, broadcasting: 1\nI0129 12:47:32.609558    3061 log.go:172] (0xc0007424d0) Reply frame received for 1\nI0129 12:47:32.609688    3061 log.go:172] (0xc0007424d0) (0xc0005f0fa0) Create stream\nI0129 12:47:32.609720    3061 log.go:172] (0xc0007424d0) (0xc0005f0fa0) Stream added, broadcasting: 3\nI0129 12:47:32.611303    3061 log.go:172] (0xc0007424d0) Reply frame received for 3\nI0129 12:47:32.611340    3061 log.go:172] (0xc0007424d0) (0xc000842780) Create stream\nI0129 12:47:32.611351    3061 log.go:172] (0xc0007424d0) (0xc000842780) Stream added, broadcasting: 5\nI0129 12:47:32.613205    3061 log.go:172] (0xc0007424d0) Reply frame received for 5\nI0129 12:47:32.761086    3061 log.go:172] (0xc0007424d0) Data frame received for 3\nI0129 12:47:32.761178    3061 log.go:172] (0xc0005f0fa0) (3) Data frame handling\nI0129 12:47:32.761201    3061 log.go:172] (0xc0005f0fa0) (3) Data frame sent\nI0129 12:47:32.905858    3061 log.go:172] (0xc0007424d0) (0xc0005f0fa0) Stream removed, broadcasting: 3\nI0129 12:47:32.906055    3061 log.go:172] (0xc0007424d0) Data frame received for 1\nI0129 12:47:32.906085    3061 log.go:172] (0xc0008426e0) (1) Data frame handling\nI0129 12:47:32.906123    3061 log.go:172] (0xc0008426e0) (1) Data frame sent\nI0129 12:47:32.906444    3061 log.go:172] (0xc0007424d0) (0xc0008426e0) Stream removed, broadcasting: 1\nI0129 12:47:32.911074    3061 log.go:172] (0xc0007424d0) (0xc000842780) Stream removed, broadcasting: 5\nI0129 12:47:32.912603    3061 log.go:172] (0xc0007424d0) Go away received\nI0129 12:47:32.913004    3061 log.go:172] (0xc0007424d0) (0xc0008426e0) Stream removed, broadcasting: 1\nI0129 12:47:32.913060    3061 log.go:172] (0xc0007424d0) (0xc0005f0fa0) Stream removed, broadcasting: 3\nI0129 12:47:32.913078    3061 log.go:172] (0xc0007424d0) (0xc000842780) Stream removed, broadcasting: 5\n"
Jan 29 12:47:32.929: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:47:32.929: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:47:32.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-c75qw ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 29 12:47:33.458: INFO: stderr: "I0129 12:47:33.103357    3083 log.go:172] (0xc00072e370) (0xc000756640) Create stream\nI0129 12:47:33.103439    3083 log.go:172] (0xc00072e370) (0xc000756640) Stream added, broadcasting: 1\nI0129 12:47:33.134278    3083 log.go:172] (0xc00072e370) Reply frame received for 1\nI0129 12:47:33.134384    3083 log.go:172] (0xc00072e370) (0xc000670be0) Create stream\nI0129 12:47:33.134419    3083 log.go:172] (0xc00072e370) (0xc000670be0) Stream added, broadcasting: 3\nI0129 12:47:33.137967    3083 log.go:172] (0xc00072e370) Reply frame received for 3\nI0129 12:47:33.138016    3083 log.go:172] (0xc00072e370) (0xc000726000) Create stream\nI0129 12:47:33.138048    3083 log.go:172] (0xc00072e370) (0xc000726000) Stream added, broadcasting: 5\nI0129 12:47:33.140762    3083 log.go:172] (0xc00072e370) Reply frame received for 5\nI0129 12:47:33.326958    3083 log.go:172] (0xc00072e370) Data frame received for 3\nI0129 12:47:33.327025    3083 log.go:172] (0xc000670be0) (3) Data frame handling\nI0129 12:47:33.327036    3083 log.go:172] (0xc000670be0) (3) Data frame sent\nI0129 12:47:33.446462    3083 log.go:172] (0xc00072e370) Data frame received for 1\nI0129 12:47:33.446562    3083 log.go:172] (0xc000756640) (1) Data frame handling\nI0129 12:47:33.446572    3083 log.go:172] (0xc000756640) (1) Data frame sent\nI0129 12:47:33.448270    3083 log.go:172] (0xc00072e370) (0xc000756640) Stream removed, broadcasting: 1\nI0129 12:47:33.448372    3083 log.go:172] (0xc00072e370) (0xc000726000) Stream removed, broadcasting: 5\nI0129 12:47:33.448403    3083 log.go:172] (0xc00072e370) (0xc000670be0) Stream removed, broadcasting: 3\nI0129 12:47:33.448451    3083 log.go:172] (0xc00072e370) Go away received\nI0129 12:47:33.448668    3083 log.go:172] (0xc00072e370) (0xc000756640) Stream removed, broadcasting: 1\nI0129 12:47:33.448683    3083 log.go:172] (0xc00072e370) (0xc000670be0) Stream removed, broadcasting: 3\nI0129 12:47:33.448698    3083 log.go:172] (0xc00072e370) (0xc000726000) Stream removed, broadcasting: 5\n"
Jan 29 12:47:33.459: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 29 12:47:33.459: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 29 12:47:33.459: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 29 12:47:53.527: INFO: Deleting all statefulset in ns e2e-tests-statefulset-c75qw
Jan 29 12:47:53.534: INFO: Scaling statefulset ss to 0
Jan 29 12:47:53.551: INFO: Waiting for statefulset status.replicas updated to 0
Jan 29 12:47:53.556: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:47:53.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-c75qw" for this suite.
Jan 29 12:48:01.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:48:01.875: INFO: namespace: e2e-tests-statefulset-c75qw, resource: bindings, ignored listing per whitelist
Jan 29 12:48:01.894: INFO: namespace e2e-tests-statefulset-c75qw deletion completed in 8.28746774s

• [SLOW TEST:104.672 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
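The readiness toggling in this test is nothing more than moving the page nginx serves out of and back into its web root over kubectl exec; once a pod goes NotReady, ordered scaling halts, as the test name says. A minimal hand-run sketch, assuming the same namespace (e2e-tests-statefulset-c75qw) and StatefulSet name (ss) as this run:

# Scale up and watch the pods appear strictly in ordinal order (ss-0, ss-1, ss-2).
kubectl --namespace=e2e-tests-statefulset-c75qw scale statefulset ss --replicas=3
kubectl --namespace=e2e-tests-statefulset-c75qw get pods -w

# Make a pod unhealthy the same way the test does; it goes NotReady once the file is gone.
kubectl --namespace=e2e-tests-statefulset-c75qw exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'

# A scale-down issued now halts; restoring the file lets it proceed in reverse ordinal order.
kubectl --namespace=e2e-tests-statefulset-c75qw scale statefulset ss --replicas=0
kubectl --namespace=e2e-tests-statefulset-c75qw exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'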
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:48:01.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 29 12:48:02.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p5v58'
Jan 29 12:48:02.300: INFO: stderr: ""
Jan 29 12:48:02.300: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 29 12:48:12.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p5v58 -o json'
Jan 29 12:48:12.510: INFO: stderr: ""
Jan 29 12:48:12.510: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-29T12:48:02Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-p5v58\",\n        \"resourceVersion\": \"19861643\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-p5v58/pods/e2e-test-nginx-pod\",\n        \"uid\": \"9602f18c-4295-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vz8jc\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vz8jc\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vz8jc\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T12:48:02Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T12:48:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T12:48:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-29T12:48:02Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://90b75453f2b0049899fa1daceac5e4fdf89b9f3f93d5c9559760c6cc935d4f8a\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-29T12:48:10Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-29T12:48:02Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 29 12:48:12.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-p5v58'
Jan 29 12:48:13.059: INFO: stderr: ""
Jan 29 12:48:13.060: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 29 12:48:13.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-p5v58'
Jan 29 12:48:22.146: INFO: stderr: ""
Jan 29 12:48:22.146: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:48:22.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-p5v58" for this suite.
Jan 29 12:48:28.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:48:28.307: INFO: namespace: e2e-tests-kubectl-p5v58, resource: bindings, ignored listing per whitelist
Jan 29 12:48:28.390: INFO: namespace e2e-tests-kubectl-p5v58 deletion completed in 6.217965404s

• [SLOW TEST:26.495 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
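The replace step above amounts to: fetch the live pod object, change spec.containers[0].image, and hand the full manifest back to kubectl replace (a pod's container image is mutable, so the update can be applied in place). A sketch of the same flow, reusing the namespace and pod name from this run; the sed-based image swap is just one convenient way to edit the JSON, not something the test prescribes:

# Create the pod as the test does (the --generator flag matches this v1.13-era kubectl).
kubectl --namespace=e2e-tests-kubectl-p5v58 run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod

# Pull the live object, swap the image, and feed it straight back to replace.
kubectl --namespace=e2e-tests-kubectl-p5v58 get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl --namespace=e2e-tests-kubectl-p5v58 replace -f -

# Confirm the image was updated.
kubectl --namespace=e2e-tests-kubectl-p5v58 get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}'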
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:48:28.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 29 12:48:28.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-m2zjw'
Jan 29 12:48:29.348: INFO: stderr: ""
Jan 29 12:48:29.348: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 29 12:48:30.376: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:30.376: INFO: Found 0 / 1
Jan 29 12:48:31.370: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:31.371: INFO: Found 0 / 1
Jan 29 12:48:32.366: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:32.367: INFO: Found 0 / 1
Jan 29 12:48:33.384: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:33.384: INFO: Found 0 / 1
Jan 29 12:48:34.836: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:34.836: INFO: Found 0 / 1
Jan 29 12:48:35.377: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:35.377: INFO: Found 0 / 1
Jan 29 12:48:36.393: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:36.393: INFO: Found 0 / 1
Jan 29 12:48:37.369: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:37.369: INFO: Found 0 / 1
Jan 29 12:48:38.383: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:38.383: INFO: Found 0 / 1
Jan 29 12:48:39.401: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:39.401: INFO: Found 1 / 1
Jan 29 12:48:39.401: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 29 12:48:39.411: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:39.411: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 12:48:39.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fdwhb --namespace=e2e-tests-kubectl-m2zjw -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 29 12:48:39.617: INFO: stderr: ""
Jan 29 12:48:39.617: INFO: stdout: "pod/redis-master-fdwhb patched\n"
STEP: checking annotations
Jan 29 12:48:39.671: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:48:39.671: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:48:39.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m2zjw" for this suite.
Jan 29 12:49:01.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:49:01.777: INFO: namespace: e2e-tests-kubectl-m2zjw, resource: bindings, ignored listing per whitelist
Jan 29 12:49:01.881: INFO: namespace e2e-tests-kubectl-m2zjw deletion completed in 22.198710927s

• [SLOW TEST:33.490 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
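The patch the test sends is a strategic merge patch that only touches pod metadata, which is why it can be applied to a running pod owned by the RC without restarting anything. The same annotation can be added and read back by hand, assuming the pod name redis-master-fdwhb from this run:

# Apply the same patch the test runs above.
kubectl --namespace=e2e-tests-kubectl-m2zjw patch pod redis-master-fdwhb -p '{"metadata":{"annotations":{"x":"y"}}}'

# Read the annotation back.
kubectl --namespace=e2e-tests-kubectl-m2zjw get pod redis-master-fdwhb -o jsonpath='{.metadata.annotations.x}'

# kubectl annotate is the equivalent shorthand for the same change.
kubectl --namespace=e2e-tests-kubectl-m2zjw annotate pod redis-master-fdwhb x=y --overwrite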
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:49:01.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:49:02.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4t8xg" for this suite.
Jan 29 12:49:08.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:49:08.369: INFO: namespace: e2e-tests-kubelet-test-4t8xg, resource: bindings, ignored listing per whitelist
Jan 29 12:49:08.445: INFO: namespace e2e-tests-kubelet-test-4t8xg deletion completed in 6.253749294s

• [SLOW TEST:6.564 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:49:08.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:49:09.098: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.111165ms)
Jan 29 12:49:09.105: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.38951ms)
Jan 29 12:49:09.109: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.300433ms)
Jan 29 12:49:09.114: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.649642ms)
Jan 29 12:49:09.117: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.568014ms)
Jan 29 12:49:09.123: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.473537ms)
Jan 29 12:49:09.127: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.444632ms)
Jan 29 12:49:09.132: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.357222ms)
Jan 29 12:49:09.147: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.055896ms)
Jan 29 12:49:09.152: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.500849ms)
Jan 29 12:49:09.157: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.158825ms)
Jan 29 12:49:09.162: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.900322ms)
Jan 29 12:49:09.167: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.794765ms)
Jan 29 12:49:09.172: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.692494ms)
Jan 29 12:49:09.177: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.720217ms)
Jan 29 12:49:09.181: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.568739ms)
Jan 29 12:49:09.185: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.831404ms)
Jan 29 12:49:09.190: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.669758ms)
Jan 29 12:49:09.193: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.71096ms)
Jan 29 12:49:09.199: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.076972ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:49:09.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-wrdt6" for this suite.
Jan 29 12:49:15.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:49:15.437: INFO: namespace: e2e-tests-proxy-wrdt6, resource: bindings, ignored listing per whitelist
Jan 29 12:49:15.483: INFO: namespace e2e-tests-proxy-wrdt6 deletion completed in 6.274539118s

• [SLOW TEST:7.037 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
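Each of the twenty probes above is a GET through the apiserver's node proxy subresource with the kubelet port spelled out explicitly, so the same listing can be fetched with kubectl's raw API access (node name taken from this run):

# List the kubelet's log directory via the proxy subresource on port 10250.
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"

# Fetch an individual file that shows up in the listing, e.g. alternatives.log.
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/alternatives.log"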
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:49:15.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:49:15.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-4t9ds" to be "success or failure"
Jan 29 12:49:15.768: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 84.293379ms
Jan 29 12:49:17.785: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100877424s
Jan 29 12:49:19.805: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121449525s
Jan 29 12:49:21.862: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178366596s
Jan 29 12:49:24.034: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.349796247s
Jan 29 12:49:26.621: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.93675728s
STEP: Saw pod success
Jan 29 12:49:26.621: INFO: Pod "downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:49:26.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:49:26.979: INFO: Waiting for pod downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005 to disappear
Jan 29 12:49:27.041: INFO: Pod downwardapi-volume-c1be3989-4295-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:49:27.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4t9ds" for this suite.
Jan 29 12:49:33.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:49:33.154: INFO: namespace: e2e-tests-projected-4t9ds, resource: bindings, ignored listing per whitelist
Jan 29 12:49:33.256: INFO: namespace e2e-tests-projected-4t9ds deletion completed in 6.204300682s

• [SLOW TEST:17.774 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
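The pod under test gets its own memory limit injected as a file through a projected downwardAPI volume, and the framework then just reads the container's log (the cat of that file). A minimal hand-written equivalent; the pod name, image, mount path, file name and 64Mi limit are illustrative assumptions, while the resourceFieldRef/divisor mechanism is the piece the test exercises:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF

# Once the pod has run to completion, the log is the limit expressed in units of the divisor (64).
kubectl logs downwardapi-volume-demo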
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:49:33.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2qk7s.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.221.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.221.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.221.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.221.212_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2qk7s;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2qk7s.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2qk7s.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 212.221.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.221.212_udp@PTR;check="$$(dig +tcp +noall +answer +search 212.221.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.221.212_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 12:49:50.300: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.306: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.317: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.324: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.331: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.346: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.352: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.358: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.366: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.371: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.376: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.381: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.386: INFO: Unable to read 10.110.221.212_udp@PTR from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.392: INFO: Unable to read 10.110.221.212_tcp@PTR from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.398: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.403: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.408: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2qk7s from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.414: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.421: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.427: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.433: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.455: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.467: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.474: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.490: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.499: INFO: Unable to read 10.110.221.212_udp@PTR from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.512: INFO: Unable to read 10.110.221.212_tcp@PTR from pod e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-cc987790-4295-11ea-8d54-0242ac110005)
Jan 29 12:49:50.512: INFO: Lookups using e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s wheezy_udp@dns-test-service.e2e-tests-dns-2qk7s.svc wheezy_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.110.221.212_udp@PTR 10.110.221.212_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-2qk7s jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s jessie_udp@dns-test-service.e2e-tests-dns-2qk7s.svc jessie_tcp@dns-test-service.e2e-tests-dns-2qk7s.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2qk7s.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.110.221.212_udp@PTR 10.110.221.212_tcp@PTR]

Jan 29 12:49:56.198: INFO: DNS probes using e2e-tests-dns-2qk7s/dns-test-cc987790-4295-11ea-8d54-0242ac110005 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:49:56.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-2qk7s" for this suite.
Jan 29 12:50:02.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:50:03.035: INFO: namespace: e2e-tests-dns-2qk7s, resource: bindings, ignored listing per whitelist
Jan 29 12:50:03.061: INFO: namespace e2e-tests-dns-2qk7s deletion completed in 6.316844241s

• [SLOW TEST:29.805 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
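Both probe pods above loop over the same set of names any in-cluster client would resolve: the bare service name, the namespace- and svc-qualified forms, and SRV records for named ports. The lookups are easy to repeat from any pod that has dig or nslookup; the lines below reuse the service and namespace names from this run (the throwaway pod name is arbitrary):

# One-off A record lookup from a scratch pod (busybox:1.28 ships a usable nslookup).
kubectl --namespace=e2e-tests-dns-2qk7s run -it --rm dns-check --image=busybox:1.28 \
  --restart=Never -- nslookup dns-test-service.e2e-tests-dns-2qk7s.svc.cluster.local

# The probe scripts themselves run dig queries of this shape for the A and SRV records:
dig +notcp +noall +answer +search dns-test-service A
dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2qk7s.svc SRV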
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:50:03.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:50:03.279: INFO: Creating deployment "nginx-deployment"
Jan 29 12:50:03.333: INFO: Waiting for observed generation 1
Jan 29 12:50:07.168: INFO: Waiting for all required pods to come up
Jan 29 12:50:07.195: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 29 12:50:45.611: INFO: Waiting for deployment "nginx-deployment" to complete
Jan 29 12:50:45.625: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan 29 12:50:45.646: INFO: Updating deployment nginx-deployment
Jan 29 12:50:45.646: INFO: Waiting for observed generation 2
Jan 29 12:50:49.100: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 29 12:50:49.117: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 29 12:50:49.993: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 29 12:50:50.422: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 29 12:50:50.422: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 29 12:50:50.513: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan 29 12:50:50.917: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan 29 12:50:50.917: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan 29 12:50:51.306: INFO: Updating deployment nginx-deployment
Jan 29 12:50:51.306: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan 29 12:50:52.425: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 29 12:50:55.242: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
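Editor's note (not part of the test output): the 8/5 split above follows from the rolling-update bounds in the Deployment dump below (maxUnavailable=2, maxSurge=3) — the old ReplicaSet may drop to 10-2=8 while the total may grow to 10+3=13, leaving room for 5 pods of the never-ready nginx:404 ReplicaSet. When the deployment is then scaled from 10 to 30 mid-rollout, the controller spreads the extra replicas proportionally across both ReplicaSets instead of giving them all to the newest one, which is how the verifications arrive at 20 and 13. A minimal sketch of that arithmetic using this run's numbers follows; the helper and variable names are assumptions and this is not the deployment controller's code.

    # Illustrative sketch of Deployment proportional scaling, using this run's values.
    # round_half_up mirrors the controller's rounding of each ReplicaSet's share.
    def round_half_up(x: float) -> int:
        return int(x + 0.5)

    max_surge      = 3                    # RollingUpdate maxSurge from the spec dump below
    allowed_after  = 30 + max_surge       # 33: max-replicas after scaling to 30
    allowed_before = 10 + max_surge       # 13: max-replicas before scaling (annotation value)

    for name, current in (("old ReplicaSet", 8), ("new ReplicaSet", 5)):
        target = round_half_up(current * allowed_after / allowed_before)
        print(f"{name}: {current} -> {target}")   # old: 8 -> 20, new: 5 -> 13 (sums to 33)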
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 29 12:50:56.342: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2vsg6/deployments/nginx-deployment,UID:de262d24-4295-11ea-a994-fa163e34d433,ResourceVersion:19862155,Generation:3,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-29 12:50:46 +0000 UTC 2020-01-29 12:50:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-29 12:50:52 +0000 UTC 2020-01-29 12:50:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan 29 12:50:56.926: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2vsg6/replicasets/nginx-deployment-5c98f8fb5,UID:f7672d17-4295-11ea-a994-fa163e34d433,ResourceVersion:19862205,Generation:3,CreationTimestamp:2020-01-29 12:50:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment de262d24-4295-11ea-a994-fa163e34d433 0xc0021fa157 0xc0021fa158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 12:50:56.926: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan 29 12:50:56.927: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2vsg6/replicasets/nginx-deployment-85ddf47c5d,UID:de31b318-4295-11ea-a994-fa163e34d433,ResourceVersion:19862204,Generation:3,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment de262d24-4295-11ea-a994-fa163e34d433 0xc0021fa217 0xc0021fa218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan 29 12:50:57.191: INFO: Pod "nginx-deployment-5c98f8fb5-8t6r4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8t6r4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-8t6r4,UID:f77bca47-4295-11ea-a994-fa163e34d433,ResourceVersion:19862138,Generation:0,CreationTimestamp:2020-01-29 12:50:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002b77be7 0xc002b77be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.192: INFO: Pod "nginx-deployment-5c98f8fb5-9hscc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9hscc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-9hscc,UID:f7f2422d-4295-11ea-a994-fa163e34d433,ResourceVersion:19862153,Generation:0,CreationTimestamp:2020-01-29 12:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002b77df7 0xc002b77df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b77e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b77e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.192: INFO: Pod "nginx-deployment-5c98f8fb5-b8l7c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b8l7c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-b8l7c,UID:fd20f763-4295-11ea-a994-fa163e34d433,ResourceVersion:19862182,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025201a7 0xc0025201a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.192: INFO: Pod "nginx-deployment-5c98f8fb5-bcqcj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bcqcj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-bcqcj,UID:f7d020be-4295-11ea-a994-fa163e34d433,ResourceVersion:19862146,Generation:0,CreationTimestamp:2020-01-29 12:50:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025202a7 0xc0025202a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.193: INFO: Pod "nginx-deployment-5c98f8fb5-bsqlb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bsqlb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-bsqlb,UID:f77cef60-4295-11ea-a994-fa163e34d433,ResourceVersion:19862139,Generation:0,CreationTimestamp:2020-01-29 12:50:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002520457 0xc002520458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025204c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025204e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.193: INFO: Pod "nginx-deployment-5c98f8fb5-dtcq6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dtcq6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-dtcq6,UID:fc68775a-4295-11ea-a994-fa163e34d433,ResourceVersion:19862178,Generation:0,CreationTimestamp:2020-01-29 12:50:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025205a7 0xc0025205a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.193: INFO: Pod "nginx-deployment-5c98f8fb5-f76qw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f76qw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-f76qw,UID:fd21489f-4295-11ea-a994-fa163e34d433,ResourceVersion:19862193,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025206a7 0xc0025206a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.194: INFO: Pod "nginx-deployment-5c98f8fb5-gt727" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gt727,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-gt727,UID:fc68af12-4295-11ea-a994-fa163e34d433,ResourceVersion:19862179,Generation:0,CreationTimestamp:2020-01-29 12:50:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025207a7 0xc0025207a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.194: INFO: Pod "nginx-deployment-5c98f8fb5-jgqql" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jgqql,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-jgqql,UID:fd571d04-4295-11ea-a994-fa163e34d433,ResourceVersion:19862200,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025208a7 0xc0025208a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.195: INFO: Pod "nginx-deployment-5c98f8fb5-jjhh4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jjhh4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-jjhh4,UID:fd20e4df-4295-11ea-a994-fa163e34d433,ResourceVersion:19862184,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc0025209a7 0xc0025209a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.196: INFO: Pod "nginx-deployment-5c98f8fb5-qxwf8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qxwf8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-qxwf8,UID:fd20c81d-4295-11ea-a994-fa163e34d433,ResourceVersion:19862187,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002520aa7 0xc002520aa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.196: INFO: Pod "nginx-deployment-5c98f8fb5-wlv4f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wlv4f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-wlv4f,UID:fc2bf90e-4295-11ea-a994-fa163e34d433,ResourceVersion:19862160,Generation:0,CreationTimestamp:2020-01-29 12:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002520ba7 0xc002520ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:54 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.197: INFO: Pod "nginx-deployment-5c98f8fb5-z272v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-z272v,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-5c98f8fb5-z272v,UID:f7720290-4295-11ea-a994-fa163e34d433,ResourceVersion:19862135,Generation:0,CreationTimestamp:2020-01-29 12:50:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 f7672d17-4295-11ea-a994-fa163e34d433 0xc002520ca7 0xc002520ca8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:45 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.197: INFO: Pod "nginx-deployment-85ddf47c5d-4sl44" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4sl44,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-4sl44,UID:fc29d6fd-4295-11ea-a994-fa163e34d433,ResourceVersion:19862206,Generation:0,CreationTimestamp:2020-01-29 12:50:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002520e07 0xc002520e08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002520e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:50:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.198: INFO: Pod "nginx-deployment-85ddf47c5d-b8wxg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-b8wxg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-b8wxg,UID:fd571b08-4295-11ea-a994-fa163e34d433,ResourceVersion:19862199,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002520f47 0xc002520f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002520fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025210c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.198: INFO: Pod "nginx-deployment-85ddf47c5d-bt7d2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bt7d2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-bt7d2,UID:fc68a108-4295-11ea-a994-fa163e34d433,ResourceVersion:19862177,Generation:0,CreationTimestamp:2020-01-29 12:50:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002521137 0xc002521138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025211a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025211d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.198: INFO: Pod "nginx-deployment-85ddf47c5d-ddsx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ddsx7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-ddsx7,UID:fd572545-4295-11ea-a994-fa163e34d433,ResourceVersion:19862201,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002521507 0xc002521508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002521570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002521590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.199: INFO: Pod "nginx-deployment-85ddf47c5d-dtv4x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dtv4x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-dtv4x,UID:de7c8a4c-4295-11ea-a994-fa163e34d433,ResourceVersion:19862059,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002521897 0xc002521898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025219c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025219e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-29 12:50:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c329e344c350d16748eaad4cb044b755d7d6a13afc854ce32bbd8919a7c83876}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.199: INFO: Pod "nginx-deployment-85ddf47c5d-f6c4g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f6c4g,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-f6c4g,UID:de62fdc9-4295-11ea-a994-fa163e34d433,ResourceVersion:19862052,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc002521d77 0xc002521d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002521f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-29 12:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://587d5e67fb98fd760a86525a144514cca5dcbbd951a233e36fb237f9fd90ea30}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.199: INFO: Pod "nginx-deployment-85ddf47c5d-hbzrc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hbzrc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-hbzrc,UID:fc63a90f-4295-11ea-a994-fa163e34d433,ResourceVersion:19862168,Generation:0,CreationTimestamp:2020-01-29 12:50:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d20c7 0xc0021d20c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.200: INFO: Pod "nginx-deployment-85ddf47c5d-khqwb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-khqwb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-khqwb,UID:de6321c2-4295-11ea-a994-fa163e34d433,ResourceVersion:19862066,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d21c7 0xc0021d21c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-29 12:50:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d55249990adc7a8e3e95b114c7f96b681fd59f5ce02abcf2f5d75ddc576ae2f3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.200: INFO: Pod "nginx-deployment-85ddf47c5d-l96rk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-l96rk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-l96rk,UID:de59e209-4295-11ea-a994-fa163e34d433,ResourceVersion:19862062,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d23a7 0xc0021d23a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-29 12:50:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ab7c827403965ba8f37710b6aab44ebdb9b480bdc3225d58977ec63deb94cf92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.201: INFO: Pod "nginx-deployment-85ddf47c5d-lvdqm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lvdqm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-lvdqm,UID:fd208785-4295-11ea-a994-fa163e34d433,ResourceVersion:19862181,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2557 0xc0021d2558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d25c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d25e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.201: INFO: Pod "nginx-deployment-85ddf47c5d-lx89m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lx89m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-lx89m,UID:de7c43d2-4295-11ea-a994-fa163e34d433,ResourceVersion:19862053,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2657 0xc0021d2658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d26c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d26e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-29 12:50:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6849bc8c15b4a85999f662845a6bd959ef73cae1074ee3aa59a031a6d773c2a9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.201: INFO: Pod "nginx-deployment-85ddf47c5d-mfb6b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mfb6b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-mfb6b,UID:fd212ab8-4295-11ea-a994-fa163e34d433,ResourceVersion:19862185,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d27a7 0xc0021d27a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.202: INFO: Pod "nginx-deployment-85ddf47c5d-mvmr2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mvmr2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-mvmr2,UID:de5e0469-4295-11ea-a994-fa163e34d433,ResourceVersion:19862086,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d28a7 0xc0021d28a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-29 12:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ff9af00f806ed8032d4970f98be1984a6e9f78d7954f15be872f0b3045b5130c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.202: INFO: Pod "nginx-deployment-85ddf47c5d-n6bhf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n6bhf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-n6bhf,UID:fd5725bc-4295-11ea-a994-fa163e34d433,ResourceVersion:19862198,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d29f7 0xc0021d29f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.202: INFO: Pod "nginx-deployment-85ddf47c5d-nqmq8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nqmq8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-nqmq8,UID:fd20c2e3-4295-11ea-a994-fa163e34d433,ResourceVersion:19862186,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2af7 0xc0021d2af8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2b60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.203: INFO: Pod "nginx-deployment-85ddf47c5d-pfrw2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pfrw2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-pfrw2,UID:fd21502c-4295-11ea-a994-fa163e34d433,ResourceVersion:19862183,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2bf7 0xc0021d2bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:55 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.203: INFO: Pod "nginx-deployment-85ddf47c5d-sklhg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sklhg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-sklhg,UID:fd572a19-4295-11ea-a994-fa163e34d433,ResourceVersion:19862203,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2cf7 0xc0021d2cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.203: INFO: Pod "nginx-deployment-85ddf47c5d-v8cx6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v8cx6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-v8cx6,UID:de5f1aae-4295-11ea-a994-fa163e34d433,ResourceVersion:19862082,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2df7 0xc0021d2df8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-29 12:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://abeab6f3740b3bad47938fa1cea207370c2f98bcbc8415b150040c0962f799fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.204: INFO: Pod "nginx-deployment-85ddf47c5d-vgx69" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vgx69,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-vgx69,UID:de62d08b-4295-11ea-a994-fa163e34d433,ResourceVersion:19862078,Generation:0,CreationTimestamp:2020-01-29 12:50:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d2f47 0xc0021d2f48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d2fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d2fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:03 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-29 12:50:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-29 12:50:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9646de63545555e01bdab6cb55fe009037101c6444b4e2fd9dc72de64ba7a5fc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan 29 12:50:57.204: INFO: Pod "nginx-deployment-85ddf47c5d-z5227" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z5227,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-2vsg6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2vsg6/pods/nginx-deployment-85ddf47c5d-z5227,UID:fd574d64-4295-11ea-a994-fa163e34d433,ResourceVersion:19862197,Generation:0,CreationTimestamp:2020-01-29 12:50:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d de31b318-4295-11ea-a994-fa163e34d433 0xc0021d3097 0xc0021d3098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xhmr7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xhmr7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xhmr7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021d3100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021d3120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:50:56 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:50:57.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-2vsg6" for this suite.
Jan 29 12:52:32.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:52:33.337: INFO: namespace: e2e-tests-deployment-2vsg6, resource: bindings, ignored listing per whitelist
Jan 29 12:52:40.722: INFO: namespace e2e-tests-deployment-2vsg6 deletion completed in 1m42.636944469s

• [SLOW TEST:157.661 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
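For reference, the proportional-scaling behaviour exercised above can be reproduced with a Deployment along the following lines. This is a minimal sketch with illustrative values (object name, replica count, surge/unavailability budgets), not the suite's exact manifest; only the image and labels are taken from the pod dumps above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # illustrative; the suite generates its own object names
spec:
  replicas: 10                    # illustrative starting size
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3                 # pods allowed above the desired count during a rollout
      maxUnavailable: 2           # pods allowed to be unavailable during a rollout
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Scaling such a Deployment while a rollout is still in flight (for example with kubectl scale deployment/nginx-deployment --replicas=30) makes the controller distribute the additional replicas across the old and new ReplicaSets in proportion to their current sizes, which is why the dump above shows both long-running available pods and freshly scheduled, not-yet-available pods owned by the same ReplicaSet.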
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:52:40.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 29 12:52:41.927: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:52:42.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vhfvr" for this suite.
Jan 29 12:52:48.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:52:48.531: INFO: namespace: e2e-tests-kubectl-vhfvr, resource: bindings, ignored listing per whitelist
Jan 29 12:52:48.665: INFO: namespace e2e-tests-kubectl-vhfvr deletion completed in 6.484005683s

• [SLOW TEST:7.942 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:52:48.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 29 12:52:49.537: INFO: created pod pod-service-account-defaultsa
Jan 29 12:52:49.537: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 29 12:52:49.582: INFO: created pod pod-service-account-mountsa
Jan 29 12:52:49.582: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 29 12:52:49.799: INFO: created pod pod-service-account-nomountsa
Jan 29 12:52:49.799: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 29 12:52:49.861: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 29 12:52:49.861: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 29 12:52:50.036: INFO: created pod pod-service-account-mountsa-mountspec
Jan 29 12:52:50.036: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 29 12:52:50.059: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 29 12:52:50.059: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 29 12:52:50.107: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 29 12:52:50.107: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 29 12:52:50.913: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 29 12:52:50.913: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 29 12:52:51.021: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 29 12:52:51.021: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:52:51.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-qtwm9" for this suite.
Jan 29 12:53:21.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:53:21.355: INFO: namespace: e2e-tests-svcaccounts-qtwm9, resource: bindings, ignored listing per whitelist
Jan 29 12:53:21.544: INFO: namespace e2e-tests-svcaccounts-qtwm9 deletion completed in 30.07522407s

• [SLOW TEST:32.878 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
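The opt-out behaviour recorded above (token volume mount: true/false per pod) follows from two fields with the same name, automountServiceAccountToken: one on the ServiceAccount and one in the Pod spec, with the pod-level value taking precedence when both are set. A minimal sketch with illustrative names and a placeholder image:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # illustrative name
automountServiceAccountToken: false    # opt out for every pod using this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomountsa-mountspec        # illustrative name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # pod-level setting wins, so the token volume is mounted anyway
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine   # placeholder image

This matches the log lines above: pod-service-account-nomountsa-mountspec reports a token volume mount of true even though its ServiceAccount opts out, while pod-service-account-mountsa-nomountspec reports false because its pod spec opts out.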
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:53:21.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5481637a-4296-11ea-8d54-0242ac110005
STEP: Creating secret with name s-test-opt-upd-54816594-4296-11ea-8d54-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5481637a-4296-11ea-8d54-0242ac110005
STEP: Updating secret s-test-opt-upd-54816594-4296-11ea-8d54-0242ac110005
STEP: Creating secret with name s-test-opt-create-54816657-4296-11ea-8d54-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:54:47.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ld825" for this suite.
Jan 29 12:55:29.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:55:29.991: INFO: namespace: e2e-tests-projected-ld825, resource: bindings, ignored listing per whitelist
Jan 29 12:55:30.036: INFO: namespace e2e-tests-projected-ld825 deletion completed in 42.491882987s

• [SLOW TEST:128.491 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
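The "optional updates" run above deletes one secret, updates a second, and creates a third while a single pod consumes all of them through one projected volume. A minimal sketch of that volume layout, with illustrative names (the suite appends unique suffixes) and a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets          # illustrative name
spec:
  containers:
  - name: consumer
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del         # deleted while the pod is running
          optional: true
      - secret:
          name: s-test-opt-upd         # updated in place
          optional: true
      - secret:
          name: s-test-opt-create      # created only after the pod starts
          optional: true

Because every source is marked optional: true, the pod keeps running while a referenced secret is missing, and the kubelet's periodic volume sync eventually removes the deleted secret's files and materialises the newly created one, which is what the "waiting to observe update in volume" step polls for.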
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:55:30.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-a1107a3c-4296-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 12:55:30.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-j8lh8" to be "success or failure"
Jan 29 12:55:30.338: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.592874ms
Jan 29 12:55:32.481: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160159101s
Jan 29 12:55:34.515: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193565091s
Jan 29 12:55:36.543: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221732877s
Jan 29 12:55:38.567: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245274716s
Jan 29 12:55:40.595: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.274092761s
STEP: Saw pod success
Jan 29 12:55:40.596: INFO: Pod "pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:55:40.624: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 12:55:41.064: INFO: Waiting for pod pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005 to disappear
Jan 29 12:55:41.152: INFO: Pod pod-projected-secrets-a112ffc5-4296-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:55:41.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-j8lh8" for this suite.
Jan 29 12:55:47.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:55:47.258: INFO: namespace: e2e-tests-projected-j8lh8, resource: bindings, ignored listing per whitelist
Jan 29 12:55:47.384: INFO: namespace e2e-tests-projected-j8lh8 deletion completed in 6.220422385s

• [SLOW TEST:17.347 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:55:47.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 12:55:47.648: INFO: Creating deployment "test-recreate-deployment"
Jan 29 12:55:47.712: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 29 12:55:47.742: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jan 29 12:55:49.778: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 29 12:55:49.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:55:51.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:55:53.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:55:55.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899347, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 12:55:57.811: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 29 12:55:57.858: INFO: Updating deployment test-recreate-deployment
Jan 29 12:55:57.858: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 29 12:55:58.603: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-vszfg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vszfg/deployments/test-recreate-deployment,UID:ab6924d6-4296-11ea-a994-fa163e34d433,ResourceVersion:19862981,Generation:2,CreationTimestamp:2020-01-29 12:55:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-29 12:55:58 +0000 UTC 2020-01-29 12:55:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-29 12:55:58 +0000 UTC 2020-01-29 12:55:47 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jan 29 12:55:58.645: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-vszfg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vszfg/replicasets/test-recreate-deployment-589c4bfd,UID:b1b08109-4296-11ea-a994-fa163e34d433,ResourceVersion:19862979,Generation:1,CreationTimestamp:2020-01-29 12:55:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ab6924d6-4296-11ea-a994-fa163e34d433 0xc001e2819f 0xc001e281b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 12:55:58.645: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 29 12:55:58.646: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-vszfg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vszfg/replicasets/test-recreate-deployment-5bf7f65dc,UID:ab768f2b-4296-11ea-a994-fa163e34d433,ResourceVersion:19862969,Generation:2,CreationTimestamp:2020-01-29 12:55:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ab6924d6-4296-11ea-a994-fa163e34d433 0xc001e28270 0xc001e28271}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 12:55:58.705: INFO: Pod "test-recreate-deployment-589c4bfd-79lzj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-79lzj,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-vszfg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vszfg/pods/test-recreate-deployment-589c4bfd-79lzj,UID:b1b34487-4296-11ea-a994-fa163e34d433,ResourceVersion:19862982,Generation:0,CreationTimestamp:2020-01-29 12:55:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd b1b08109-4296-11ea-a994-fa163e34d433 0xc0010de0af 0xc0010de0c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xh5q6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xh5q6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xh5q6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010de120} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010de140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:55:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:55:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:55:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 12:55:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-29 12:55:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:55:58.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-vszfg" for this suite.
Jan 29 12:56:14.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:56:15.078: INFO: namespace: e2e-tests-deployment-vszfg, resource: bindings, ignored listing per whitelist
Jan 29 12:56:15.117: INFO: namespace e2e-tests-deployment-vszfg deletion completed in 16.403630766s

• [SLOW TEST:27.733 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
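A minimal sketch of a Recreate-strategy Deployment like the one above; the container images match the log (redis in revision 1, nginx in revision 2), while triggering the rollout with kubectl set image is an assumption about how to reproduce it by hand.

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate          # delete all old pods before any new pod is created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# trigger revision 2; the spec then verifies that no new pod runs while an old pod is still up
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine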
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:56:15.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 29 12:56:15.290: INFO: namespace e2e-tests-kubectl-jslt7
Jan 29 12:56:15.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-jslt7'
Jan 29 12:56:17.994: INFO: stderr: ""
Jan 29 12:56:17.994: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 29 12:56:19.010: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:19.010: INFO: Found 0 / 1
Jan 29 12:56:20.293: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:20.293: INFO: Found 0 / 1
Jan 29 12:56:21.012: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:21.013: INFO: Found 0 / 1
Jan 29 12:56:22.013: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:22.014: INFO: Found 0 / 1
Jan 29 12:56:23.067: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:23.067: INFO: Found 0 / 1
Jan 29 12:56:24.055: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:24.055: INFO: Found 0 / 1
Jan 29 12:56:25.776: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:25.776: INFO: Found 0 / 1
Jan 29 12:56:26.604: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:26.604: INFO: Found 0 / 1
Jan 29 12:56:27.008: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:27.008: INFO: Found 0 / 1
Jan 29 12:56:28.267: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:28.268: INFO: Found 0 / 1
Jan 29 12:56:29.015: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:29.015: INFO: Found 0 / 1
Jan 29 12:56:30.048: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:30.048: INFO: Found 0 / 1
Jan 29 12:56:31.006: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:31.006: INFO: Found 1 / 1
Jan 29 12:56:31.006: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 29 12:56:31.012: INFO: Selector matched 1 pods for map[app:redis]
Jan 29 12:56:31.012: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 29 12:56:31.012: INFO: wait on redis-master startup in e2e-tests-kubectl-jslt7 
Jan 29 12:56:31.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qz55w redis-master --namespace=e2e-tests-kubectl-jslt7'
Jan 29 12:56:31.316: INFO: stderr: ""
Jan 29 12:56:31.316: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 29 Jan 12:56:29.023 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 29 Jan 12:56:29.023 # Server started, Redis version 3.2.12\n1:M 29 Jan 12:56:29.024 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 29 Jan 12:56:29.024 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 29 12:56:31.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-jslt7'
Jan 29 12:56:31.483: INFO: stderr: ""
Jan 29 12:56:31.483: INFO: stdout: "service/rm2 exposed\n"
Jan 29 12:56:31.495: INFO: Service rm2 in namespace e2e-tests-kubectl-jslt7 found.
STEP: exposing service
Jan 29 12:56:33.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-jslt7'
Jan 29 12:56:33.980: INFO: stderr: ""
Jan 29 12:56:33.981: INFO: stdout: "service/rm3 exposed\n"
Jan 29 12:56:33.990: INFO: Service rm3 in namespace e2e-tests-kubectl-jslt7 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:56:36.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jslt7" for this suite.
Jan 29 12:57:00.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:57:00.209: INFO: namespace: e2e-tests-kubectl-jslt7, resource: bindings, ignored listing per whitelist
Jan 29 12:57:00.212: INFO: namespace e2e-tests-kubectl-jslt7 deletion completed in 24.196824253s

• [SLOW TEST:45.094 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
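Condensed, the expose chain the spec above drives; it assumes a replication controller named redis-master already serving on 6379, as created in the log, and the final get is an added sanity check rather than part of the test.

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both services should list the redis-master pod on port 6379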
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:57:00.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 29 12:57:00.533: INFO: Waiting up to 5m0s for pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005" in namespace "e2e-tests-emptydir-5ddvz" to be "success or failure"
Jan 29 12:57:00.562: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.50052ms
Jan 29 12:57:02.936: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.401865638s
Jan 29 12:57:05.025: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.491030456s
Jan 29 12:57:07.045: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511301304s
Jan 29 12:57:09.090: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556206757s
Jan 29 12:57:11.123: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.589148213s
STEP: Saw pod success
Jan 29 12:57:11.123: INFO: Pod "pod-d6d2fb9e-4296-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:57:11.154: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d6d2fb9e-4296-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 12:57:11.369: INFO: Waiting for pod pod-d6d2fb9e-4296-11ea-8d54-0242ac110005 to disappear
Jan 29 12:57:11.411: INFO: Pod pod-d6d2fb9e-4296-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:57:11.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5ddvz" for this suite.
Jan 29 12:57:17.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:57:17.620: INFO: namespace: e2e-tests-emptydir-5ddvz, resource: bindings, ignored listing per whitelist
Jan 29 12:57:17.651: INFO: namespace e2e-tests-emptydir-5ddvz deletion completed in 6.224895665s

• [SLOW TEST:17.439 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
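A rough equivalent of the (root,0777,default) emptyDir case above, using a plain busybox container in place of the test's mount-test image (an assumption); it writes a file, sets mode 0777, and prints the mode back.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}            # default medium, i.e. node-local disk
EOF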
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:57:17.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 29 12:57:17.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:18.290: INFO: stderr: ""
Jan 29 12:57:18.290: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 12:57:18.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:18.608: INFO: stderr: ""
Jan 29 12:57:18.609: INFO: stdout: "update-demo-nautilus-4ctkw update-demo-nautilus-jf8tb "
Jan 29 12:57:18.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:18.758: INFO: stderr: ""
Jan 29 12:57:18.759: INFO: stdout: ""
Jan 29 12:57:18.759: INFO: update-demo-nautilus-4ctkw is created but not running
Jan 29 12:57:23.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:23.917: INFO: stderr: ""
Jan 29 12:57:23.917: INFO: stdout: "update-demo-nautilus-4ctkw update-demo-nautilus-jf8tb "
Jan 29 12:57:23.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:24.093: INFO: stderr: ""
Jan 29 12:57:24.093: INFO: stdout: ""
Jan 29 12:57:24.093: INFO: update-demo-nautilus-4ctkw is created but not running
Jan 29 12:57:29.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:29.329: INFO: stderr: ""
Jan 29 12:57:29.329: INFO: stdout: "update-demo-nautilus-4ctkw update-demo-nautilus-jf8tb "
Jan 29 12:57:29.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:29.475: INFO: stderr: ""
Jan 29 12:57:29.476: INFO: stdout: ""
Jan 29 12:57:29.476: INFO: update-demo-nautilus-4ctkw is created but not running
Jan 29 12:57:34.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:34.660: INFO: stderr: ""
Jan 29 12:57:34.660: INFO: stdout: "update-demo-nautilus-4ctkw update-demo-nautilus-jf8tb "
Jan 29 12:57:34.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:35.120: INFO: stderr: ""
Jan 29 12:57:35.120: INFO: stdout: ""
Jan 29 12:57:35.120: INFO: update-demo-nautilus-4ctkw is created but not running
Jan 29 12:57:40.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:40.265: INFO: stderr: ""
Jan 29 12:57:40.265: INFO: stdout: "update-demo-nautilus-4ctkw update-demo-nautilus-jf8tb "
Jan 29 12:57:40.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:40.392: INFO: stderr: ""
Jan 29 12:57:40.392: INFO: stdout: "true"
Jan 29 12:57:40.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ctkw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:40.587: INFO: stderr: ""
Jan 29 12:57:40.587: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:57:40.587: INFO: validating pod update-demo-nautilus-4ctkw
Jan 29 12:57:40.668: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 12:57:40.668: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:57:40.668: INFO: update-demo-nautilus-4ctkw is verified up and running
Jan 29 12:57:40.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf8tb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:40.793: INFO: stderr: ""
Jan 29 12:57:40.793: INFO: stdout: "true"
Jan 29 12:57:40.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jf8tb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:57:40.936: INFO: stderr: ""
Jan 29 12:57:40.936: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:57:40.936: INFO: validating pod update-demo-nautilus-jf8tb
Jan 29 12:57:40.949: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 12:57:40.949: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:57:40.949: INFO: update-demo-nautilus-jf8tb is verified up and running
STEP: rolling-update to new replication controller
Jan 29 12:57:40.952: INFO: scanned /root for discovery docs: 
Jan 29 12:57:40.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:19.637: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 29 12:58:19.637: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 12:58:19.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:19.861: INFO: stderr: ""
Jan 29 12:58:19.861: INFO: stdout: "update-demo-kitten-5j8pc update-demo-kitten-rlwt9 update-demo-nautilus-jf8tb "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan 29 12:58:24.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:25.094: INFO: stderr: ""
Jan 29 12:58:25.094: INFO: stdout: "update-demo-kitten-5j8pc update-demo-kitten-rlwt9 "
Jan 29 12:58:25.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5j8pc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:25.291: INFO: stderr: ""
Jan 29 12:58:25.291: INFO: stdout: "true"
Jan 29 12:58:25.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5j8pc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:25.420: INFO: stderr: ""
Jan 29 12:58:25.420: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 29 12:58:25.420: INFO: validating pod update-demo-kitten-5j8pc
Jan 29 12:58:25.605: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 29 12:58:25.605: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 29 12:58:25.605: INFO: update-demo-kitten-5j8pc is verified up and running
Jan 29 12:58:25.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rlwt9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:25.758: INFO: stderr: ""
Jan 29 12:58:25.758: INFO: stdout: "true"
Jan 29 12:58:25.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rlwt9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-259cb'
Jan 29 12:58:25.970: INFO: stderr: ""
Jan 29 12:58:25.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 29 12:58:25.971: INFO: validating pod update-demo-kitten-rlwt9
Jan 29 12:58:25.992: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 29 12:58:25.992: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 29 12:58:25.992: INFO: update-demo-kitten-rlwt9 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:58:25.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-259cb" for this suite.
Jan 29 12:58:52.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:58:52.349: INFO: namespace: e2e-tests-kubectl-259cb, resource: bindings, ignored listing per whitelist
Jan 29 12:58:52.568: INFO: namespace e2e-tests-kubectl-259cb deletion completed in 26.566150786s

• [SLOW TEST:94.916 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
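The rolling update above, reduced to its two kubectl calls. The manifest file names are hypothetical, and rolling-update is deprecated in this release in favour of Deployments and kubectl rollout, as the log's own stderr warning notes.

kubectl create -f nautilus-rc.yaml    # rc "update-demo-nautilus" with image gcr.io/kubernetes-e2e-test-images/nautilus:1.0
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml   # replacement rc using gcr.io/kubernetes-e2e-test-images/kitten:1.0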
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:58:52.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 29 12:58:52.830: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-k2q7c" to be "success or failure"
Jan 29 12:58:52.856: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 26.438514ms
Jan 29 12:58:54.979: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149065549s
Jan 29 12:58:56.996: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166251226s
Jan 29 12:58:59.447: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617855358s
Jan 29 12:59:01.841: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011656534s
Jan 29 12:59:03.926: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.096295734s
Jan 29 12:59:06.782: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.95233873s
STEP: Saw pod success
Jan 29 12:59:06.782: INFO: Pod "downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 12:59:06.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005 container client-container: 
STEP: delete the pod
Jan 29 12:59:07.253: INFO: Waiting for pod downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 12:59:07.318: INFO: Pod downwardapi-volume-19c50549-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:59:07.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k2q7c" for this suite.
Jan 29 12:59:13.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:59:13.543: INFO: namespace: e2e-tests-projected-k2q7c, resource: bindings, ignored listing per whitelist
Jan 29 12:59:13.652: INFO: namespace e2e-tests-projected-k2q7c deletion completed in 6.313824782s

• [SLOW TEST:21.081 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
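A minimal sketch of a projected downward API volume with an explicit per-item mode, which is the property this spec asserts on; the names, the 0400 mode, and the busybox image are assumptions rather than the test's generated values.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400              # explicit item mode instead of the volume-wide default
            fieldRef:
              fieldPath: metadata.name
EOF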
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:59:13.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 29 12:59:14.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:14.473: INFO: stderr: ""
Jan 29 12:59:14.473: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 29 12:59:14.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:14.804: INFO: stderr: ""
Jan 29 12:59:14.804: INFO: stdout: "update-demo-nautilus-jj4qn "
STEP: Replicas for name=update-demo: expected=2 actual=1
Jan 29 12:59:19.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:19.975: INFO: stderr: ""
Jan 29 12:59:19.975: INFO: stdout: "update-demo-nautilus-cfwsq update-demo-nautilus-jj4qn "
Jan 29 12:59:19.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfwsq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:20.123: INFO: stderr: ""
Jan 29 12:59:20.123: INFO: stdout: ""
Jan 29 12:59:20.123: INFO: update-demo-nautilus-cfwsq is created but not running
Jan 29 12:59:25.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:25.916: INFO: stderr: ""
Jan 29 12:59:25.916: INFO: stdout: "update-demo-nautilus-cfwsq update-demo-nautilus-jj4qn "
Jan 29 12:59:25.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfwsq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:26.235: INFO: stderr: ""
Jan 29 12:59:26.235: INFO: stdout: ""
Jan 29 12:59:26.235: INFO: update-demo-nautilus-cfwsq is created but not running
Jan 29 12:59:31.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:31.430: INFO: stderr: ""
Jan 29 12:59:31.430: INFO: stdout: "update-demo-nautilus-cfwsq update-demo-nautilus-jj4qn "
Jan 29 12:59:31.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfwsq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:31.527: INFO: stderr: ""
Jan 29 12:59:31.527: INFO: stdout: "true"
Jan 29 12:59:31.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfwsq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:31.630: INFO: stderr: ""
Jan 29 12:59:31.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:59:31.630: INFO: validating pod update-demo-nautilus-cfwsq
Jan 29 12:59:31.665: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 12:59:31.665: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:59:31.665: INFO: update-demo-nautilus-cfwsq is verified up and running
Jan 29 12:59:31.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jj4qn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:31.781: INFO: stderr: ""
Jan 29 12:59:31.781: INFO: stdout: "true"
Jan 29 12:59:31.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jj4qn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:31.945: INFO: stderr: ""
Jan 29 12:59:31.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 29 12:59:31.945: INFO: validating pod update-demo-nautilus-jj4qn
Jan 29 12:59:31.956: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 29 12:59:31.956: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 29 12:59:31.956: INFO: update-demo-nautilus-jj4qn is verified up and running
STEP: using delete to clean up resources
Jan 29 12:59:31.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:32.108: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 29 12:59:32.108: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 29 12:59:32.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-bnkh9'
Jan 29 12:59:32.324: INFO: stderr: "No resources found.\n"
Jan 29 12:59:32.324: INFO: stdout: ""
Jan 29 12:59:32.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-bnkh9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 29 12:59:32.646: INFO: stderr: ""
Jan 29 12:59:32.646: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 12:59:32.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bnkh9" for this suite.
Jan 29 12:59:56.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 12:59:57.028: INFO: namespace: e2e-tests-kubectl-bnkh9, resource: bindings, ignored listing per whitelist
Jan 29 12:59:57.295: INFO: namespace e2e-tests-kubectl-bnkh9 deletion completed in 24.611241697s

• [SLOW TEST:43.643 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 12:59:57.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-407aef36-4297-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 12:59:57.953: INFO: Waiting up to 5m0s for pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-configmap-vjwlh" to be "success or failure"
Jan 29 12:59:57.983: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.101146ms
Jan 29 13:00:00.383: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.429921646s
Jan 29 13:00:02.425: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472121281s
Jan 29 13:00:04.463: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51020015s
Jan 29 13:00:06.985: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.032022609s
Jan 29 13:00:09.082: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.12896633s
Jan 29 13:00:11.122: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.168660273s
STEP: Saw pod success
Jan 29 13:00:11.122: INFO: Pod "pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:00:11.149: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 29 13:00:11.357: INFO: Waiting for pod pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 13:00:11.367: INFO: Pod pod-configmaps-407e08d6-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:00:11.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vjwlh" for this suite.
Jan 29 13:00:17.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:00:17.786: INFO: namespace: e2e-tests-configmap-vjwlh, resource: bindings, ignored listing per whitelist
Jan 29 13:00:17.897: INFO: namespace e2e-tests-configmap-vjwlh deletion completed in 6.518879724s

• [SLOW TEST:20.600 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
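A minimal sketch of the defaultMode case above; the ConfigMap name, key, chosen mode, and busybox image are assumptions.

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      defaultMode: 0400       # every projected key gets this mode unless overridden per item
EOF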
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:00:17.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 13:00:18.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:00:30.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vkx4l" for this suite.
Jan 29 13:01:24.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:01:25.011: INFO: namespace: e2e-tests-pods-vkx4l, resource: bindings, ignored listing per whitelist
Jan 29 13:01:25.018: INFO: namespace e2e-tests-pods-vkx4l deletion completed in 54.236582978s

• [SLOW TEST:67.120 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:01:25.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 29 13:01:25.208: INFO: Waiting up to 5m0s for pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-lqhlr" to be "success or failure"
Jan 29 13:01:25.225: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.833058ms
Jan 29 13:01:27.244: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035672911s
Jan 29 13:01:29.272: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063480223s
Jan 29 13:01:31.497: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288378561s
Jan 29 13:01:33.535: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.326860041s
Jan 29 13:01:35.554: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34541345s
STEP: Saw pod success
Jan 29 13:01:35.554: INFO: Pod "downward-api-748f09c9-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:01:35.557: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-748f09c9-4297-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 13:01:35.658: INFO: Waiting for pod downward-api-748f09c9-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 13:01:35.857: INFO: Pod downward-api-748f09c9-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:01:35.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lqhlr" for this suite.
Jan 29 13:01:41.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:01:42.109: INFO: namespace: e2e-tests-downward-api-lqhlr, resource: bindings, ignored listing per whitelist
Jan 29 13:01:42.124: INFO: namespace e2e-tests-downward-api-lqhlr deletion completed in 6.247217177s

• [SLOW TEST:17.106 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:01:42.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 29 13:01:42.391: INFO: Waiting up to 5m0s for pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-containers-fwkvq" to be "success or failure"
Jan 29 13:01:42.402: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.356313ms
Jan 29 13:01:44.429: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038507868s
Jan 29 13:01:46.444: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052866126s
Jan 29 13:01:48.733: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.342424315s
Jan 29 13:01:50.886: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495346071s
Jan 29 13:01:52.917: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.526470288s
Jan 29 13:01:54.936: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.545406591s
STEP: Saw pod success
Jan 29 13:01:54.936: INFO: Pod "client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:01:54.956: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 13:01:56.041: INFO: Waiting for pod client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 13:01:56.055: INFO: Pod client-containers-7ed4bd6b-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:01:56.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fwkvq" for this suite.
Jan 29 13:02:02.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:02:02.199: INFO: namespace: e2e-tests-containers-fwkvq, resource: bindings, ignored listing per whitelist
Jan 29 13:02:02.261: INFO: namespace e2e-tests-containers-fwkvq deletion completed in 6.189530141s

• [SLOW TEST:20.136 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:02:02.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 29 13:02:02.549: INFO: Waiting up to 5m0s for pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-downward-api-mxplm" to be "success or failure"
Jan 29 13:02:02.633: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 83.919543ms
Jan 29 13:02:04.663: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113595917s
Jan 29 13:02:06.683: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13356942s
Jan 29 13:02:08.709: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160495149s
Jan 29 13:02:10.724: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175414673s
Jan 29 13:02:12.739: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.189911132s
Jan 29 13:02:14.766: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.217426726s
STEP: Saw pod success
Jan 29 13:02:14.767: INFO: Pod "downward-api-8acfb893-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:02:14.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-8acfb893-4297-11ea-8d54-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 29 13:02:15.105: INFO: Waiting for pod downward-api-8acfb893-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 13:02:15.198: INFO: Pod downward-api-8acfb893-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:02:15.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mxplm" for this suite.
Jan 29 13:02:21.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:02:21.481: INFO: namespace: e2e-tests-downward-api-mxplm, resource: bindings, ignored listing per whitelist
Jan 29 13:02:21.501: INFO: namespace e2e-tests-downward-api-mxplm deletion completed in 6.291471944s

• [SLOW TEST:19.240 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
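For reference, the Downward API test above exposes the container's own CPU/memory requests and limits as environment variables through resourceFieldRef. A rough Go sketch of such a pod is below; the env var names, resource quantities, and image are illustrative assumptions, while the container name "dapi-container" mirrors the log.

// Illustrative sketch only; quantities and env var names are assumed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// resourceEnv builds an env var whose value is resolved from the container's
// own resource fields. Divisor is left at its zero value; the API server
// defaults it to "1".
func resourceEnv(name, res string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: res},
		},
	}
}

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					resourceEnv("CPU_LIMIT", "limits.cpu"),
					resourceEnv("MEMORY_LIMIT", "limits.memory"),
					resourceEnv("CPU_REQUEST", "requests.cpu"),
					resourceEnv("MEMORY_REQUEST", "requests.memory"),
				},
			}},
		},
	}
	fmt.Printf("%d downward-API env vars on %s\n",
		len(pod.Spec.Containers[0].Env), pod.Name)
}

When no limits are set on the container, the same resourceFieldRef mechanism falls back to the node's allocatable capacity, which is what the earlier "default limits.cpu/memory from node allocatable" test checks.
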
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:02:21.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0129 13:02:35.980250       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 13:02:35.980: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:02:35.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4f98l" for this suite.
Jan 29 13:03:10.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:03:10.734: INFO: namespace: e2e-tests-gc-4f98l, resource: bindings, ignored listing per whitelist
Jan 29 13:03:10.812: INFO: namespace e2e-tests-gc-4f98l deletion completed in 34.826288818s

• [SLOW TEST:49.310 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:03:10.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 29 13:03:13.736: INFO: Number of nodes with available pods: 0
Jan 29 13:03:13.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:14.797: INFO: Number of nodes with available pods: 0
Jan 29 13:03:14.797: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:16.364: INFO: Number of nodes with available pods: 0
Jan 29 13:03:16.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:16.766: INFO: Number of nodes with available pods: 0
Jan 29 13:03:16.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:17.790: INFO: Number of nodes with available pods: 0
Jan 29 13:03:17.790: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:18.817: INFO: Number of nodes with available pods: 0
Jan 29 13:03:18.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:19.757: INFO: Number of nodes with available pods: 0
Jan 29 13:03:19.757: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:20.808: INFO: Number of nodes with available pods: 0
Jan 29 13:03:20.808: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:22.778: INFO: Number of nodes with available pods: 0
Jan 29 13:03:22.778: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:24.496: INFO: Number of nodes with available pods: 0
Jan 29 13:03:24.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:24.753: INFO: Number of nodes with available pods: 0
Jan 29 13:03:24.753: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:25.763: INFO: Number of nodes with available pods: 0
Jan 29 13:03:25.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:26.763: INFO: Number of nodes with available pods: 0
Jan 29 13:03:26.763: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:27.764: INFO: Number of nodes with available pods: 1
Jan 29 13:03:27.764: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 29 13:03:27.917: INFO: Number of nodes with available pods: 0
Jan 29 13:03:27.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:28.938: INFO: Number of nodes with available pods: 0
Jan 29 13:03:28.938: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:30.191: INFO: Number of nodes with available pods: 0
Jan 29 13:03:30.191: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:30.975: INFO: Number of nodes with available pods: 0
Jan 29 13:03:30.975: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:33.052: INFO: Number of nodes with available pods: 0
Jan 29 13:03:33.052: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:34.022: INFO: Number of nodes with available pods: 0
Jan 29 13:03:34.023: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:34.932: INFO: Number of nodes with available pods: 0
Jan 29 13:03:34.932: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:36.976: INFO: Number of nodes with available pods: 0
Jan 29 13:03:36.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:38.707: INFO: Number of nodes with available pods: 0
Jan 29 13:03:38.708: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:39.680: INFO: Number of nodes with available pods: 0
Jan 29 13:03:39.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:39.992: INFO: Number of nodes with available pods: 0
Jan 29 13:03:39.992: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 29 13:03:40.933: INFO: Number of nodes with available pods: 1
Jan 29 13:03:40.933: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kqm6b, will wait for the garbage collector to delete the pods
Jan 29 13:03:41.014: INFO: Deleting DaemonSet.extensions daemon-set took: 16.710197ms
Jan 29 13:03:41.114: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.561696ms
Jan 29 13:03:48.830: INFO: Number of nodes with available pods: 0
Jan 29 13:03:48.830: INFO: Number of running nodes: 0, number of available pods: 0
Jan 29 13:03:48.840: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kqm6b/daemonsets","resourceVersion":"19864074"},"items":null}

Jan 29 13:03:48.851: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kqm6b/pods","resourceVersion":"19864074"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:03:48.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kqm6b" for this suite.
Jan 29 13:03:55.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:03:55.124: INFO: namespace: e2e-tests-daemonsets-kqm6b, resource: bindings, ignored listing per whitelist
Jan 29 13:03:55.268: INFO: namespace e2e-tests-daemonsets-kqm6b deletion completed in 6.38708683s

• [SLOW TEST:44.455 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
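For context, the DaemonSet test above creates a simple DaemonSet, forces one of its pods into the Failed phase, and waits for the controller to revive it. A minimal Go sketch of a comparable DaemonSet object follows; the labels and container name are assumptions, and the image is borrowed from elsewhere in this run.

// Illustrative sketch only; labels and container name are assumed.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			// The selector must match the pod template's labels.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	fmt.Printf("DaemonSet %q schedules one pod per eligible node\n", ds.Name)
}
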
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:03:55.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 29 13:04:06.063: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-ce4c0a17-4297-11ea-8d54-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-4d6vd", SelfLink:"/api/v1/namespaces/e2e-tests-pods-4d6vd/pods/pod-submit-remove-ce4c0a17-4297-11ea-8d54-0242ac110005", UID:"ce4ec33d-4297-11ea-a994-fa163e34d433", ResourceVersion:"19864126", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715899835, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"680689036"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-844zl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a7bb00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-844zl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a80628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fddf20), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a80660)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a80680)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a80688), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a8068c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899835, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899845, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899845, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715899835, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00186cfc0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00186cfe0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://fbdc3f615a3ff45b9499da7f71392450058225167e9b8355840c8f0d2dfba8d9"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:04:22.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-4d6vd" for this suite.
Jan 29 13:04:30.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:31.094: INFO: namespace: e2e-tests-pods-4d6vd, resource: bindings, ignored listing per whitelist
Jan 29 13:04:31.108: INFO: namespace e2e-tests-pods-4d6vd deletion completed in 8.235963226s

• [SLOW TEST:35.840 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:04:31.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:04:44.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fhf42" for this suite.
Jan 29 13:04:51.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:04:51.527: INFO: namespace: e2e-tests-kubelet-test-fhf42, resource: bindings, ignored listing per whitelist
Jan 29 13:04:51.662: INFO: namespace e2e-tests-kubelet-test-fhf42 deletion completed in 7.149923921s

• [SLOW TEST:20.553 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:04:51.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 29 13:04:51.974: INFO: Waiting up to 5m0s for pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005" in namespace "e2e-tests-containers-nftml" to be "success or failure"
Jan 29 13:04:51.984: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283783ms
Jan 29 13:04:54.199: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224854568s
Jan 29 13:04:56.221: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247541334s
Jan 29 13:04:59.111: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.137244652s
Jan 29 13:05:01.125: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150797713s
Jan 29 13:05:03.163: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.189095344s
STEP: Saw pod success
Jan 29 13:05:03.163: INFO: Pod "client-containers-efd63346-4297-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:05:03.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-efd63346-4297-11ea-8d54-0242ac110005 container test-container: 
STEP: delete the pod
Jan 29 13:05:03.446: INFO: Waiting for pod client-containers-efd63346-4297-11ea-8d54-0242ac110005 to disappear
Jan 29 13:05:03.646: INFO: Pod client-containers-efd63346-4297-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:05:03.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-nftml" for this suite.
Jan 29 13:05:09.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:05:10.025: INFO: namespace: e2e-tests-containers-nftml, resource: bindings, ignored listing per whitelist
Jan 29 13:05:10.105: INFO: namespace e2e-tests-containers-nftml deletion completed in 6.431353487s

• [SLOW TEST:18.442 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
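For context, the "override all" test above sets both Command and Args on the container: Command replaces the image's ENTRYPOINT and Args replace its CMD. A rough Go sketch of that kind of pod is below; the image and the echoed strings are assumptions, while the container name "test-container" mirrors the log.

// Illustrative sketch only; image and arguments are assumed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command overrides the image ENTRYPOINT; Args override its CMD.
				// Leaving both unset (as in the earlier "image defaults" test)
				// runs whatever the image itself defines.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "all"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}
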
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:05:10.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0129 13:05:13.932373       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 29 13:05:13.932: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:05:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-tzgp7" for this suite.
Jan 29 13:05:24.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:05:24.919: INFO: namespace: e2e-tests-gc-tzgp7, resource: bindings, ignored listing per whitelist
Jan 29 13:05:24.926: INFO: namespace e2e-tests-gc-tzgp7 deletion completed in 10.975730437s

• [SLOW TEST:14.819 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
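For context, the garbage collector test above deletes a Deployment without orphaning, then waits for the owned ReplicaSet and pods to be collected. That behaviour is driven by the deletion propagation policy. The sketch below only constructs the DeleteOptions, since the exact Delete method signature varies across client-go versions; the policy choice shown is an assumption for illustration.

// Illustrative sketch only; shows the options, not a specific client call.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation deletes the owner right away and lets the
	// garbage collector remove dependents afterwards; Foreground blocks the
	// owner's deletion until dependents are gone; Orphan leaves them behind.
	policy := metav1.DeletePropagationBackground

	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// These options would be passed to the Delete call of an apps/v1
	// Deployments client in whatever client-go version is in use.
	fmt.Printf("delete propagation: %v\n", *opts.PropagationPolicy)
}
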
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:05:24.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 29 13:05:41.200: INFO: Pod pod-hostip-039f862b-4298-11ea-8d54-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:05:41.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ktnvf" for this suite.
Jan 29 13:06:05.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:06:05.918: INFO: namespace: e2e-tests-pods-ktnvf, resource: bindings, ignored listing per whitelist
Jan 29 13:06:05.979: INFO: namespace e2e-tests-pods-ktnvf deletion completed in 24.774001042s

• [SLOW TEST:41.053 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
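For context, the "should get a host IP" test above only checks that a running pod reports its node's address in status. The relevant fields live on the pod status; the sketch below fills them with the addresses seen in this log, though in practice the kubelet populates them once the pod is scheduled and running.

// Illustrative sketch only; status values are normally set by the kubelet.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	status := corev1.PodStatus{
		Phase:  corev1.PodRunning,
		HostIP: "10.96.1.240", // address of the node the pod landed on
		PodIP:  "10.32.0.4",   // address assigned to the pod itself
	}

	if status.HostIP != "" {
		fmt.Printf("pod has hostIP: %s (podIP %s)\n", status.HostIP, status.PodIP)
	}
}
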
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:06:05.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-1c364a4a-4298-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 29 13:06:06.436: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-c7rs9" to be "success or failure"
Jan 29 13:06:06.511: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 74.907775ms
Jan 29 13:06:09.398: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.961901591s
Jan 29 13:06:11.413: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.976136414s
Jan 29 13:06:13.527: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.090327692s
Jan 29 13:06:16.561: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.124836211s
Jan 29 13:06:18.590: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.153831617s
Jan 29 13:06:20.617: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.180849082s
Jan 29 13:06:22.751: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 16.315087755s
Jan 29 13:06:24.770: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.333590927s
STEP: Saw pod success
Jan 29 13:06:24.770: INFO: Pod "pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:06:24.777: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 29 13:06:26.268: INFO: Waiting for pod pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005 to disappear
Jan 29 13:06:26.720: INFO: Pod pod-projected-configmaps-1c380632-4298-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:06:26.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c7rs9" for this suite.
Jan 29 13:06:34.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:06:34.987: INFO: namespace: e2e-tests-projected-c7rs9, resource: bindings, ignored listing per whitelist
Jan 29 13:06:35.224: INFO: namespace e2e-tests-projected-c7rs9 deletion completed in 8.478859276s

• [SLOW TEST:29.243 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:06:35.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2d822cc5-4298-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 13:06:35.434: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005" in namespace "e2e-tests-projected-rn9qw" to be "success or failure"
Jan 29 13:06:35.441: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.879592ms
Jan 29 13:06:37.624: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18996226s
Jan 29 13:06:39.799: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364613441s
Jan 29 13:06:41.859: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424282054s
Jan 29 13:06:43.882: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.447399003s
Jan 29 13:06:45.901: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.466514381s
STEP: Saw pod success
Jan 29 13:06:45.901: INFO: Pod "pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:06:45.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 29 13:06:46.558: INFO: Waiting for pod pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005 to disappear
Jan 29 13:06:46.601: INFO: Pod pod-projected-secrets-2d82fe1b-4298-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:06:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rn9qw" for this suite.
Jan 29 13:06:52.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:06:52.837: INFO: namespace: e2e-tests-projected-rn9qw, resource: bindings, ignored listing per whitelist
Jan 29 13:06:52.961: INFO: namespace e2e-tests-projected-rn9qw deletion completed in 6.333544176s

• [SLOW TEST:17.737 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:06:52.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-380da539-4298-11ea-8d54-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 29 13:06:53.233: INFO: Waiting up to 5m0s for pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005" in namespace "e2e-tests-secrets-r85z4" to be "success or failure"
Jan 29 13:06:53.265: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.577976ms
Jan 29 13:06:55.755: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521862581s
Jan 29 13:06:57.769: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535992401s
Jan 29 13:07:00.309: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075445666s
Jan 29 13:07:02.552: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.318984698s
Jan 29 13:07:04.611: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.377522802s
STEP: Saw pod success
Jan 29 13:07:04.612: INFO: Pod "pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005" satisfied condition "success or failure"
Jan 29 13:07:04.633: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 29 13:07:05.781: INFO: Waiting for pod pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005 to disappear
Jan 29 13:07:05.809: INFO: Pod pod-secrets-381f1b4c-4298-11ea-8d54-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:07:05.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r85z4" for this suite.
Jan 29 13:07:14.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:07:14.262: INFO: namespace: e2e-tests-secrets-r85z4, resource: bindings, ignored listing per whitelist
Jan 29 13:07:14.339: INFO: namespace e2e-tests-secrets-r85z4 deletion completed in 8.512173556s
STEP: Destroying namespace "e2e-tests-secret-namespace-lhh2j" for this suite.
Jan 29 13:07:20.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:07:20.736: INFO: namespace: e2e-tests-secret-namespace-lhh2j, resource: bindings, ignored listing per whitelist
Jan 29 13:07:20.933: INFO: namespace e2e-tests-secret-namespace-lhh2j deletion completed in 6.593616457s

• [SLOW TEST:27.971 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:07:20.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 29 13:07:21.279: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 29 13:07:26.326: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 29 13:07:30.449: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 29 13:07:32.472: INFO: Creating deployment "test-rollover-deployment"
Jan 29 13:07:32.568: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 29 13:07:35.209: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 29 13:07:35.226: INFO: Ensure that both replica sets have 1 created replica
Jan 29 13:07:35.237: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 29 13:07:35.261: INFO: Updating deployment test-rollover-deployment
Jan 29 13:07:35.262: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 29 13:07:37.506: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 29 13:07:37.523: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 29 13:07:37.533: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:37.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:39.788: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:39.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:41.766: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:41.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:43.569: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:43.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:45.588: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:45.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:47.567: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:47.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900056, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:49.554: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:49.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900068, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:51.581: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:51.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900068, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:53.561: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:53.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900068, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:55.557: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:55.557: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900068, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:57.579: INFO: all replica sets need to contain the pod-template-hash label
Jan 29 13:07:57.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900068, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715900052, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 29 13:07:59.567: INFO: 
Jan 29 13:07:59.567: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jan 29 13:07:59.589: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-m2jd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m2jd9/deployments/test-rollover-deployment,UID:4f84de64-4298-11ea-a994-fa163e34d433,ResourceVersion:19864676,Generation:2,CreationTimestamp:2020-01-29 13:07:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-29 13:07:33 +0000 UTC 2020-01-29 13:07:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-29 13:07:58 +0000 UTC 2020-01-29 13:07:32 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan 29 13:07:59.600: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-m2jd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m2jd9/replicasets/test-rollover-deployment-5b8479fdb6,UID:512f2006-4298-11ea-a994-fa163e34d433,ResourceVersion:19864667,Generation:2,CreationTimestamp:2020-01-29 13:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4f84de64-4298-11ea-a994-fa163e34d433 0xc00122fa87 0xc00122fa88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan 29 13:07:59.600: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 29 13:07:59.601: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-m2jd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m2jd9/replicasets/test-rollover-controller,UID:48d24173-4298-11ea-a994-fa163e34d433,ResourceVersion:19864675,Generation:2,CreationTimestamp:2020-01-29 13:07:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4f84de64-4298-11ea-a994-fa163e34d433 0xc00122f6bf 0xc00122f6d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 13:07:59.601: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-m2jd9,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-m2jd9/replicasets/test-rollover-deployment-58494b7559,UID:4fbcd945-4298-11ea-a994-fa163e34d433,ResourceVersion:19864632,Generation:2,CreationTimestamp:2020-01-29 13:07:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4f84de64-4298-11ea-a994-fa163e34d433 0xc00122f9b7 0xc00122f9b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan 29 13:07:59.618: INFO: Pod "test-rollover-deployment-5b8479fdb6-j8m2h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-j8m2h,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-m2jd9,SelfLink:/api/v1/namespaces/e2e-tests-deployment-m2jd9/pods/test-rollover-deployment-5b8479fdb6-j8m2h,UID:51952d6f-4298-11ea-a994-fa163e34d433,ResourceVersion:19864652,Generation:0,CreationTimestamp:2020-01-29 13:07:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 512f2006-4298-11ea-a994-fa163e34d433 0xc001a27417 0xc001a27418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-gtb75 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtb75,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-gtb75 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001a274b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001a274d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:07:36 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:07:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:07:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-29 13:07:35 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-29 13:07:36 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-29 13:07:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://18b38003460a1de4610f96bf29f40caba7fa97f8bb7ee4072101072a546a4052}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:07:59.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-m2jd9" for this suite.
Jan 29 13:08:10.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:08:10.404: INFO: namespace: e2e-tests-deployment-m2jd9, resource: bindings, ignored listing per whitelist
Jan 29 13:08:10.482: INFO: namespace e2e-tests-deployment-m2jd9 deletion completed in 10.852220446s

• [SLOW TEST:49.549 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
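The rollover spec starts from a bare replica set of nginx pods, layers the "test-rollover-deployment" Deployment on top of it (maxUnavailable=0, maxSurge=1, minReadySeconds=10), updates the pod template image, and then polls the status dumps above until only the new ReplicaSet holds replicas. A hedged kubectl sketch of driving and observing the same kind of rollover (namespace, deployment, container, image, and label names are taken from the Deployment dump above; the e2e test itself performs these steps through the Go client, not kubectl):

    # Change the pod template image; this creates a new ReplicaSet and starts the rollover.
    kubectl -n e2e-tests-deployment-m2jd9 set image deployment/test-rollover-deployment \
        redis=gcr.io/kubernetes-e2e-test-images/redis:1.0

    # With maxUnavailable=0 and minReadySeconds=10, the new pod must become Ready and stay
    # Ready for 10 seconds before the old ReplicaSet is scaled to zero.
    kubectl -n e2e-tests-deployment-m2jd9 rollout status deployment/test-rollover-deployment
    kubectl -n e2e-tests-deployment-m2jd9 get replicasets -l name=rollover-pod
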
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 29 13:08:10.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hs59h.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-hs59h.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

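Each probe in the two loops above follows the same pattern; unrolled, and with the $$ template escaping reduced to ordinary shell, a single check reads:

    # One probe, unrolled: resolve the kubernetes.default Service over UDP using the
    # pod's search path, and record success by writing an OK marker into /results.
    check="$(dig +notcp +noall +answer +search kubernetes.default A)" \
      && test -n "$check" \
      && echo OK > /results/wheezy_udp@kubernetes.default

    # Equivalent ad-hoc check of cluster DNS from a throwaway pod (not part of the test):
    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup kubernetes.default
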
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 29 13:08:28.867: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.876: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.884: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.893: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.898: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.902: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.911: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.917: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.922: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:28.927: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.011: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.023: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.029: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.034: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.042: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.046: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.050: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.055: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.059: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.063: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005: the server could not find the requested resource (get pods dns-test-6649318f-4298-11ea-8d54-0242ac110005)
Jan 29 13:08:29.063: INFO: Lookups using e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-hs59h.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 29 13:08:34.474: INFO: DNS probes using e2e-tests-dns-hs59h/dns-test-6649318f-4298-11ea-8d54-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 29 13:08:34.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-hs59h" for this suite.
Jan 29 13:08:42.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 29 13:08:42.786: INFO: namespace: e2e-tests-dns-hs59h, resource: bindings, ignored listing per whitelist
Jan 29 13:08:42.880: INFO: namespace e2e-tests-dns-hs59h deletion completed in 8.298715678s

• [SLOW TEST:32.397 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
Jan 29 13:08:42.880: INFO: Running AfterSuite actions on all nodes
Jan 29 13:08:42.881: INFO: Running AfterSuite actions on node 1
Jan 29 13:08:42.881: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8498.669 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS
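
For reference, a conformance run like this one is typically launched with the upstream e2e.test binary built from the same release; a sketch of an equivalent invocation (the flag spellings below are an assumption about the common usage and should be checked against the --help output of the binary actually built for this cluster):

    # Run only the [Conformance] specs against the cluster named in the kubeconfig.
    ./e2e.test --kubeconfig=/root/.kube/config \
      --provider=skeleton \
      --ginkgo.focus='\[Conformance\]'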