I0221 10:47:13.255316 8 e2e.go:224] Starting e2e run "847a4476-5497-11ea-b1f8-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1582282032 - Will randomize all specs
Will run 201 of 2164 specs

Feb 21 10:47:13.472: INFO: >>> kubeConfig: /root/.kube/config
Feb 21 10:47:13.477: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 21 10:47:13.500: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 21 10:47:13.542: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 21 10:47:13.542: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 21 10:47:13.542: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 21 10:47:13.554: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 21 10:47:13.554: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 21 10:47:13.554: INFO: e2e test version: v1.13.12
Feb 21 10:47:13.555: INFO: kube-apiserver version: v1.13.8
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 10:47:13.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 21 10:47:13.805: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 21 10:47:13.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-226fw'
Feb 21 10:47:15.891: INFO: stderr: ""
Feb 21 10:47:15.891: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
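For reference, the replication controller above is created by piping a manifest into kubectl on stdin, and the subsequent polling uses a go-template over the pod list. A minimal manual sketch, assuming a local copy of the nautilus manifest saved as update-demo.yaml (that file name is hypothetical, not from the log):

  # Create the RC from a manifest on stdin, in the test's namespace.
  cat update-demo.yaml | kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-226fw
  # List the pods the RC spawned, using the same label selector the test polls with.
  kubectl get pods -l name=update-demo --namespace=e2e-tests-kubectl-226fw \
    -o go-template='{{range .items}}{{.metadata.name}} {{end}}'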
Feb 21 10:47:15.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:16.024: INFO: stderr: "" Feb 21 10:47:16.024: INFO: stdout: "update-demo-nautilus-968nr " STEP: Replicas for name=update-demo: expected=2 actual=1 Feb 21 10:47:21.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:21.226: INFO: stderr: "" Feb 21 10:47:21.227: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-ss8q9 " Feb 21 10:47:21.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:21.322: INFO: stderr: "" Feb 21 10:47:21.322: INFO: stdout: "" Feb 21 10:47:21.322: INFO: update-demo-nautilus-968nr is created but not running Feb 21 10:47:26.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:26.590: INFO: stderr: "" Feb 21 10:47:26.590: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-ss8q9 " Feb 21 10:47:26.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:26.710: INFO: stderr: "" Feb 21 10:47:26.710: INFO: stdout: "" Feb 21 10:47:26.710: INFO: update-demo-nautilus-968nr is created but not running Feb 21 10:47:31.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:31.954: INFO: stderr: "" Feb 21 10:47:31.955: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-ss8q9 " Feb 21 10:47:31.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:32.049: INFO: stderr: "" Feb 21 10:47:32.049: INFO: stdout: "true" Feb 21 10:47:32.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:32.156: INFO: stderr: "" Feb 21 10:47:32.156: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:32.156: INFO: validating pod update-demo-nautilus-968nr Feb 21 10:47:32.243: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:32.243: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:32.243: INFO: update-demo-nautilus-968nr is verified up and running Feb 21 10:47:32.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ss8q9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:32.395: INFO: stderr: "" Feb 21 10:47:32.395: INFO: stdout: "true" Feb 21 10:47:32.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ss8q9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:32.536: INFO: stderr: "" Feb 21 10:47:32.537: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:32.537: INFO: validating pod update-demo-nautilus-ss8q9 Feb 21 10:47:32.568: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:32.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:32.568: INFO: update-demo-nautilus-ss8q9 is verified up and running STEP: scaling down the replication controller Feb 21 10:47:32.573: INFO: scanned /root for discovery docs: Feb 21 10:47:32.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:33.993: INFO: stderr: "" Feb 21 10:47:33.994: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 21 10:47:33.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:34.176: INFO: stderr: "" Feb 21 10:47:34.176: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-ss8q9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 21 10:47:39.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:39.367: INFO: stderr: "" Feb 21 10:47:39.367: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-ss8q9 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 21 10:47:44.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:44.532: INFO: stderr: "" Feb 21 10:47:44.533: INFO: stdout: "update-demo-nautilus-968nr " Feb 21 10:47:44.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:44.637: INFO: stderr: "" Feb 21 10:47:44.637: INFO: stdout: "true" Feb 21 10:47:44.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:44.728: INFO: stderr: "" Feb 21 10:47:44.728: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:44.729: INFO: validating pod update-demo-nautilus-968nr Feb 21 10:47:44.735: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:44.736: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:44.736: INFO: update-demo-nautilus-968nr is verified up and running STEP: scaling up the replication controller Feb 21 10:47:44.737: INFO: scanned /root for discovery docs: Feb 21 10:47:44.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:46.244: INFO: stderr: "" Feb 21 10:47:46.245: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 21 10:47:46.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:46.622: INFO: stderr: "" Feb 21 10:47:46.622: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-zz8r9 " Feb 21 10:47:46.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:46.750: INFO: stderr: "" Feb 21 10:47:46.750: INFO: stdout: "true" Feb 21 10:47:46.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:46.838: INFO: stderr: "" Feb 21 10:47:46.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:46.838: INFO: validating pod update-demo-nautilus-968nr Feb 21 10:47:46.854: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:46.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:46.854: INFO: update-demo-nautilus-968nr is verified up and running Feb 21 10:47:46.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:47.395: INFO: stderr: "" Feb 21 10:47:47.395: INFO: stdout: "" Feb 21 10:47:47.395: INFO: update-demo-nautilus-zz8r9 is created but not running Feb 21 10:47:52.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:52.717: INFO: stderr: "" Feb 21 10:47:52.717: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-zz8r9 " Feb 21 10:47:52.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:52.874: INFO: stderr: "" Feb 21 10:47:52.874: INFO: stdout: "true" Feb 21 10:47:52.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:52.962: INFO: stderr: "" Feb 21 10:47:52.962: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:52.963: INFO: validating pod update-demo-nautilus-968nr Feb 21 10:47:53.031: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:53.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:53.031: INFO: update-demo-nautilus-968nr is verified up and running Feb 21 10:47:53.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:53.119: INFO: stderr: "" Feb 21 10:47:53.119: INFO: stdout: "" Feb 21 10:47:53.119: INFO: update-demo-nautilus-zz8r9 is created but not running Feb 21 10:47:58.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.279: INFO: stderr: "" Feb 21 10:47:58.279: INFO: stdout: "update-demo-nautilus-968nr update-demo-nautilus-zz8r9 " Feb 21 10:47:58.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.410: INFO: stderr: "" Feb 21 10:47:58.410: INFO: stdout: "true" Feb 21 10:47:58.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-968nr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.533: INFO: stderr: "" Feb 21 10:47:58.534: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:58.534: INFO: validating pod update-demo-nautilus-968nr Feb 21 10:47:58.553: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:58.553: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:58.553: INFO: update-demo-nautilus-968nr is verified up and running Feb 21 10:47:58.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8r9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.661: INFO: stderr: "" Feb 21 10:47:58.661: INFO: stdout: "true" Feb 21 10:47:58.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zz8r9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.748: INFO: stderr: "" Feb 21 10:47:58.748: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 21 10:47:58.748: INFO: validating pod update-demo-nautilus-zz8r9 Feb 21 10:47:58.757: INFO: got data: { "image": "nautilus.jpg" } Feb 21 10:47:58.757: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 21 10:47:58.757: INFO: update-demo-nautilus-zz8r9 is verified up and running STEP: using delete to clean up resources Feb 21 10:47:58.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:58.849: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 21 10:47:58.850: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 21 10:47:58.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-226fw' Feb 21 10:47:59.312: INFO: stderr: "No resources found.\n" Feb 21 10:47:59.313: INFO: stdout: "" Feb 21 10:47:59.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-226fw -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 21 10:48:01.899: INFO: stderr: "" Feb 21 10:48:01.900: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:48:01.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-226fw" for this suite. Feb 21 10:48:26.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:48:26.345: INFO: namespace: e2e-tests-kubectl-226fw, resource: bindings, ignored listing per whitelist Feb 21 10:48:26.400: INFO: namespace e2e-tests-kubectl-226fw deletion completed in 24.450555696s • [SLOW TEST:72.846 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:48:26.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-jgdf STEP: Creating a pod to test atomic-volume-subpath Feb 21 10:48:26.921: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-jgdf" in namespace "e2e-tests-subpath-78bj9" to be "success or failure" Feb 21 10:48:26.932: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.738544ms Feb 21 10:48:28.955: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033773488s Feb 21 10:48:30.979: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05772567s Feb 21 10:48:33.475: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.553390662s Feb 21 10:48:35.487: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565504555s Feb 21 10:48:37.501: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.579970216s Feb 21 10:48:39.535: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.613524069s Feb 21 10:48:41.566: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.645162052s Feb 21 10:48:43.649: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 16.727886203s Feb 21 10:48:45.666: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 18.744801299s Feb 21 10:48:47.682: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 20.760548863s Feb 21 10:48:49.697: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 22.776150659s Feb 21 10:48:51.717: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 24.796130469s Feb 21 10:48:53.733: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 26.811880041s Feb 21 10:48:55.752: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 28.831039353s Feb 21 10:48:57.770: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 30.848458249s Feb 21 10:48:59.902: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Running", Reason="", readiness=false. Elapsed: 32.980573357s Feb 21 10:49:01.920: INFO: Pod "pod-subpath-test-secret-jgdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.999059221s STEP: Saw pod success Feb 21 10:49:01.920: INFO: Pod "pod-subpath-test-secret-jgdf" satisfied condition "success or failure" Feb 21 10:49:01.926: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-jgdf container test-container-subpath-secret-jgdf: STEP: delete the pod Feb 21 10:49:02.062: INFO: Waiting for pod pod-subpath-test-secret-jgdf to disappear Feb 21 10:49:02.156: INFO: Pod pod-subpath-test-secret-jgdf no longer exists STEP: Deleting pod pod-subpath-test-secret-jgdf Feb 21 10:49:02.156: INFO: Deleting pod "pod-subpath-test-secret-jgdf" in namespace "e2e-tests-subpath-78bj9" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:49:02.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-78bj9" for this suite. 
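The "success or failure" wait above is the framework repeatedly reading the pod's phase until it reaches Succeeded. A rough manual equivalent using jsonpath output (only meaningful while the short-lived test pod still exists):

  # Poll the subpath test pod's phase in the test namespace.
  kubectl get pod pod-subpath-test-secret-jgdf -n e2e-tests-subpath-78bj9 -o jsonpath='{.status.phase}'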
Feb 21 10:49:08.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:49:08.334: INFO: namespace: e2e-tests-subpath-78bj9, resource: bindings, ignored listing per whitelist Feb 21 10:49:08.387: INFO: namespace e2e-tests-subpath-78bj9 deletion completed in 6.205143709s • [SLOW TEST:41.986 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:49:08.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-gnf5 STEP: Creating a pod to test atomic-volume-subpath Feb 21 10:49:08.728: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gnf5" in namespace "e2e-tests-subpath-qsv9v" to be "success or failure" Feb 21 10:49:08.741: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.509683ms Feb 21 10:49:10.776: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047667615s Feb 21 10:49:16.097: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.36838694s Feb 21 10:49:18.108: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.379509274s Feb 21 10:49:22.256: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.526861531s Feb 21 10:49:24.271: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.541854987s Feb 21 10:49:28.419: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 19.690310697s Feb 21 10:49:30.712: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 21.983614946s Feb 21 10:49:33.425: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.696664405s Feb 21 10:49:35.444: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.715750268s Feb 21 10:49:37.462: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 28.733491893s Feb 21 10:49:39.473: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.743840833s Feb 21 10:49:41.488: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 32.759669076s Feb 21 10:49:43.509: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 34.780020619s Feb 21 10:49:45.534: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 36.804913642s Feb 21 10:49:47.550: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 38.82086774s Feb 21 10:49:49.567: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 40.838675714s Feb 21 10:49:51.593: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Running", Reason="", readiness=false. Elapsed: 42.864276446s Feb 21 10:49:53.603: INFO: Pod "pod-subpath-test-projected-gnf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.87426216s STEP: Saw pod success Feb 21 10:49:53.603: INFO: Pod "pod-subpath-test-projected-gnf5" satisfied condition "success or failure" Feb 21 10:49:53.608: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-gnf5 container test-container-subpath-projected-gnf5: STEP: delete the pod Feb 21 10:49:53.971: INFO: Waiting for pod pod-subpath-test-projected-gnf5 to disappear Feb 21 10:49:54.005: INFO: Pod pod-subpath-test-projected-gnf5 no longer exists STEP: Deleting pod pod-subpath-test-projected-gnf5 Feb 21 10:49:54.006: INFO: Deleting pod "pod-subpath-test-projected-gnf5" in namespace "e2e-tests-subpath-qsv9v" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:49:54.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qsv9v" for this suite. 
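The log-retrieval step above ("Trying to get logs from node ... container ...") corresponds to an ordinary kubectl logs call against the named container; a sketch with the pod and container names from the log:

  # Fetch the test container's output before the pod is deleted.
  kubectl logs pod-subpath-test-projected-gnf5 -c test-container-subpath-projected-gnf5 -n e2e-tests-subpath-qsv9v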
Feb 21 10:50:00.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:50:00.369: INFO: namespace: e2e-tests-subpath-qsv9v, resource: bindings, ignored listing per whitelist Feb 21 10:50:00.452: INFO: namespace e2e-tests-subpath-qsv9v deletion completed in 6.278060577s • [SLOW TEST:52.065 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:50:00.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 10:50:00.839: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:50:11.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-2jlz9" for this suite. 
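The websocket test above exercises the pod log subresource of the API server. The log does not print the generated pod name, so the sketch below uses a placeholder; kubectl get --raw issues a plain HTTPS GET against the same endpoint rather than upgrading the connection to a websocket:

  # Read the pod log subresource directly (substitute the real pod name).
  kubectl get --raw "/api/v1/namespaces/e2e-tests-pods-2jlz9/pods/<pod-name>/log"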
Feb 21 10:51:05.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:51:05.163: INFO: namespace: e2e-tests-pods-2jlz9, resource: bindings, ignored listing per whitelist Feb 21 10:51:05.199: INFO: namespace e2e-tests-pods-2jlz9 deletion completed in 54.172439432s • [SLOW TEST:64.747 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:51:05.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fdr44 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 21 10:51:05.396: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 21 10:51:37.693: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-fdr44 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 21 10:51:37.694: INFO: >>> kubeConfig: /root/.kube/config I0221 10:51:37.778006 8 log.go:172] (0xc000a5c580) (0xc000da72c0) Create stream I0221 10:51:37.778126 8 log.go:172] (0xc000a5c580) (0xc000da72c0) Stream added, broadcasting: 1 I0221 10:51:37.784881 8 log.go:172] (0xc000a5c580) Reply frame received for 1 I0221 10:51:37.784983 8 log.go:172] (0xc000a5c580) (0xc000be1540) Create stream I0221 10:51:37.785001 8 log.go:172] (0xc000a5c580) (0xc000be1540) Stream added, broadcasting: 3 I0221 10:51:37.787985 8 log.go:172] (0xc000a5c580) Reply frame received for 3 I0221 10:51:37.788033 8 log.go:172] (0xc000a5c580) (0xc0003b5b80) Create stream I0221 10:51:37.788064 8 log.go:172] (0xc000a5c580) (0xc0003b5b80) Stream added, broadcasting: 5 I0221 10:51:37.790871 8 log.go:172] (0xc000a5c580) Reply frame received for 5 I0221 10:51:38.161806 8 log.go:172] (0xc000a5c580) Data frame received for 3 I0221 10:51:38.161878 8 log.go:172] (0xc000be1540) (3) Data frame handling I0221 10:51:38.161905 8 log.go:172] (0xc000be1540) (3) Data frame sent I0221 10:51:38.272452 8 log.go:172] (0xc000a5c580) Data frame received for 1 I0221 10:51:38.272539 8 log.go:172] (0xc000da72c0) (1) Data frame handling I0221 10:51:38.272568 8 log.go:172] (0xc000da72c0) (1) Data frame sent I0221 10:51:38.273048 8 log.go:172] (0xc000a5c580) (0xc000da72c0) Stream removed, broadcasting: 1 I0221 10:51:38.273620 8 log.go:172] 
(0xc000a5c580) (0xc000be1540) Stream removed, broadcasting: 3 I0221 10:51:38.273939 8 log.go:172] (0xc000a5c580) (0xc0003b5b80) Stream removed, broadcasting: 5 I0221 10:51:38.274059 8 log.go:172] (0xc000a5c580) (0xc000da72c0) Stream removed, broadcasting: 1 I0221 10:51:38.274110 8 log.go:172] (0xc000a5c580) (0xc000be1540) Stream removed, broadcasting: 3 I0221 10:51:38.274154 8 log.go:172] (0xc000a5c580) (0xc0003b5b80) Stream removed, broadcasting: 5 Feb 21 10:51:38.274: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:51:38.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0221 10:51:38.274611 8 log.go:172] (0xc000a5c580) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-fdr44" for this suite. Feb 21 10:52:02.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:52:02.505: INFO: namespace: e2e-tests-pod-network-test-fdr44, resource: bindings, ignored listing per whitelist Feb 21 10:52:02.650: INFO: namespace e2e-tests-pod-network-test-fdr44 deletion completed in 24.354652697s • [SLOW TEST:57.450 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:52:02.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 21 10:52:02.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 21 10:52:02.992: INFO: stderr: "" Feb 21 10:52:02.992: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:52:02.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sx68g" for this suite. 
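The cluster-info check above can be reproduced verbatim; the dump subcommand suggested in its own output is useful when the master or KubeDNS endpoints look wrong:

  kubectl --kubeconfig=/root/.kube/config cluster-info
  # Much more verbose; prints full cluster state for debugging.
  kubectl --kubeconfig=/root/.kube/config cluster-info dump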
Feb 21 10:52:09.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:52:09.160: INFO: namespace: e2e-tests-kubectl-sx68g, resource: bindings, ignored listing per whitelist Feb 21 10:52:09.369: INFO: namespace e2e-tests-kubectl-sx68g deletion completed in 6.368106595s • [SLOW TEST:6.719 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:52:09.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition Feb 21 10:52:09.590: INFO: Waiting up to 5m0s for pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008" in namespace "e2e-tests-var-expansion-g9clh" to be "success or failure" Feb 21 10:52:09.616: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.809228ms Feb 21 10:52:11.870: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279198834s Feb 21 10:52:13.893: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30198465s Feb 21 10:52:16.600: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.009170974s Feb 21 10:52:18.625: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.03471509s Feb 21 10:52:20.664: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.073727009s STEP: Saw pod success Feb 21 10:52:20.665: INFO: Pod "var-expansion-35647f46-5498-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 10:52:20.671: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-35647f46-5498-11ea-b1f8-0242ac110008 container dapi-container: STEP: delete the pod Feb 21 10:52:20.974: INFO: Waiting for pod var-expansion-35647f46-5498-11ea-b1f8-0242ac110008 to disappear Feb 21 10:52:20.988: INFO: Pod var-expansion-35647f46-5498-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:52:20.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-g9clh" for this suite. Feb 21 10:52:27.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:52:27.110: INFO: namespace: e2e-tests-var-expansion-g9clh, resource: bindings, ignored listing per whitelist Feb 21 10:52:27.162: INFO: namespace e2e-tests-var-expansion-g9clh deletion completed in 6.165013131s • [SLOW TEST:17.793 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:52:27.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-4051c1a7-5498-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 21 10:52:28.095: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-sf45r" to be "success or failure" Feb 21 10:52:28.113: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.108181ms Feb 21 10:52:30.238: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142584401s Feb 21 10:52:32.262: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166782763s Feb 21 10:52:34.277: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181815828s Feb 21 10:52:36.304: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.208340002s Feb 21 10:52:38.477: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.381712676s STEP: Saw pod success Feb 21 10:52:38.478: INFO: Pod "pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 10:52:38.494: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 21 10:52:38.834: INFO: Waiting for pod pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008 to disappear Feb 21 10:52:38.849: INFO: Pod pod-projected-configmaps-406c0996-5498-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:52:38.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sf45r" for this suite. Feb 21 10:52:45.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:52:45.098: INFO: namespace: e2e-tests-projected-sf45r, resource: bindings, ignored listing per whitelist Feb 21 10:52:45.162: INFO: namespace e2e-tests-projected-sf45r deletion completed in 6.284636701s • [SLOW TEST:18.000 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:52:45.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 21 10:52:45.779: INFO: Waiting up to 5m0s for pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh" in namespace "e2e-tests-svcaccounts-pxfbr" to be "success or failure" Feb 21 10:52:45.882: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 103.336323ms Feb 21 10:52:47.906: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12652263s Feb 21 10:52:49.924: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145346501s Feb 21 10:52:52.724: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.944808391s Feb 21 10:52:54.749: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.969748569s Feb 21 10:52:56.763: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.983387799s Feb 21 10:53:00.007: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.227667742s Feb 21 10:53:02.025: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.245371521s Feb 21 10:53:04.055: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.275782291s STEP: Saw pod success Feb 21 10:53:04.055: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh" satisfied condition "success or failure" Feb 21 10:53:04.064: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh container token-test: STEP: delete the pod Feb 21 10:53:04.293: INFO: Waiting for pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh to disappear Feb 21 10:53:04.333: INFO: Pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-ddgkh no longer exists STEP: Creating a pod to test consume service account root CA Feb 21 10:53:04.410: INFO: Waiting up to 5m0s for pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk" in namespace "e2e-tests-svcaccounts-pxfbr" to be "success or failure" Feb 21 10:53:04.471: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 60.770005ms Feb 21 10:53:06.712: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301442849s Feb 21 10:53:08.726: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315157795s Feb 21 10:53:10.980: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569321751s Feb 21 10:53:13.003: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.592914319s Feb 21 10:53:15.532: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 11.12095097s Feb 21 10:53:17.754: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.343000958s Feb 21 10:53:19.804: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 15.393061127s Feb 21 10:53:21.830: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 17.419901769s Feb 21 10:53:23.853: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.442370994s Feb 21 10:53:25.883: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.472556507s STEP: Saw pod success Feb 21 10:53:25.883: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk" satisfied condition "success or failure" Feb 21 10:53:25.891: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk container root-ca-test: STEP: delete the pod Feb 21 10:53:26.490: INFO: Waiting for pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk to disappear Feb 21 10:53:26.597: INFO: Pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-28sqk no longer exists STEP: Creating a pod to test consume service account namespace Feb 21 10:53:26.678: INFO: Waiting up to 5m0s for pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5" in namespace "e2e-tests-svcaccounts-pxfbr" to be "success or failure" Feb 21 10:53:26.743: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 65.380219ms Feb 21 10:53:28.759: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080765209s Feb 21 10:53:30.770: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092504268s Feb 21 10:53:32.781: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102660232s Feb 21 10:53:34.819: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140998352s Feb 21 10:53:37.286: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.608019966s Feb 21 10:53:39.298: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.620557413s Feb 21 10:53:42.170: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.492421536s STEP: Saw pod success Feb 21 10:53:42.171: INFO: Pod "pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5" satisfied condition "success or failure" Feb 21 10:53:42.796: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5 container namespace-test: STEP: delete the pod Feb 21 10:53:43.241: INFO: Waiting for pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5 to disappear Feb 21 10:53:43.261: INFO: Pod pod-service-account-4af6316b-5498-11ea-b1f8-0242ac110008-h69v5 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:53:43.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-pxfbr" for this suite. 
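Each of the three pods above reads one of the files Kubernetes mounts for the pod's service account (token, ca.crt, namespace). For a pod that is still running, the same files can be listed directly; the pod name below is a placeholder, since the test pods are deleted as soon as they succeed:

  # token, ca.crt and namespace live under the standard service account mount path.
  kubectl exec <pod-name> -n e2e-tests-svcaccounts-pxfbr -- ls /var/run/secrets/kubernetes.io/serviceaccount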
Feb 21 10:53:51.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:53:51.594: INFO: namespace: e2e-tests-svcaccounts-pxfbr, resource: bindings, ignored listing per whitelist Feb 21 10:53:51.605: INFO: namespace e2e-tests-svcaccounts-pxfbr deletion completed in 8.325173217s • [SLOW TEST:66.442 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:53:51.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 10:53:51.881: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Feb 21 10:53:51.937: INFO: Number of nodes with available pods: 0 Feb 21 10:53:51.937: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:52.962: INFO: Number of nodes with available pods: 0 Feb 21 10:53:52.962: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:54.346: INFO: Number of nodes with available pods: 0 Feb 21 10:53:54.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:54.965: INFO: Number of nodes with available pods: 0 Feb 21 10:53:54.965: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:57.119: INFO: Number of nodes with available pods: 0 Feb 21 10:53:57.120: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:58.001: INFO: Number of nodes with available pods: 0 Feb 21 10:53:58.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:53:58.988: INFO: Number of nodes with available pods: 0 Feb 21 10:53:58.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:00.139: INFO: Number of nodes with available pods: 0 Feb 21 10:54:00.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:00.958: INFO: Number of nodes with available pods: 0 Feb 21 10:54:00.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:01.972: INFO: Number of nodes with available pods: 0 Feb 21 10:54:01.973: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:02.958: INFO: Number of nodes with available pods: 0 Feb 21 10:54:02.958: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:03.976: INFO: Number 
of nodes with available pods: 1 Feb 21 10:54:03.976: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 21 10:54:04.150: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:05.172: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:06.169: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:08.007: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:09.198: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:10.173: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:11.186: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:12.190: INFO: Wrong image for pod: daemon-set-zd5r9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 21 10:54:12.191: INFO: Pod daemon-set-zd5r9 is not available Feb 21 10:54:13.249: INFO: Pod daemon-set-g29s5 is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 21 10:54:13.413: INFO: Number of nodes with available pods: 0 Feb 21 10:54:13.414: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:14.459: INFO: Number of nodes with available pods: 0 Feb 21 10:54:14.459: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:15.444: INFO: Number of nodes with available pods: 0 Feb 21 10:54:15.444: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:16.433: INFO: Number of nodes with available pods: 0 Feb 21 10:54:16.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:21.557: INFO: Number of nodes with available pods: 0 Feb 21 10:54:21.557: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:22.625: INFO: Number of nodes with available pods: 0 Feb 21 10:54:22.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:23.490: INFO: Number of nodes with available pods: 0 Feb 21 10:54:23.490: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:24.437: INFO: Number of nodes with available pods: 0 Feb 21 10:54:24.438: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:54:25.443: INFO: Number of nodes with available pods: 1 Feb 21 10:54:25.443: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-j2nkf, will wait for the garbage collector to delete the pods Feb 21 10:54:25.545: INFO: Deleting DaemonSet.extensions daemon-set took: 26.864256ms Feb 21 
10:54:25.846: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.747994ms Feb 21 10:54:33.709: INFO: Number of nodes with available pods: 0 Feb 21 10:54:33.709: INFO: Number of running nodes: 0, number of available pods: 0 Feb 21 10:54:33.719: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j2nkf/daemonsets","resourceVersion":"22410760"},"items":null} Feb 21 10:54:33.723: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j2nkf/pods","resourceVersion":"22410760"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:54:33.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-j2nkf" for this suite. Feb 21 10:54:41.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:54:41.943: INFO: namespace: e2e-tests-daemonsets-j2nkf, resource: bindings, ignored listing per whitelist Feb 21 10:54:42.014: INFO: namespace e2e-tests-daemonsets-j2nkf deletion completed in 8.253398081s • [SLOW TEST:50.409 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:54:42.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 10:54:42.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-jb2rh" to be "success or failure" Feb 21 10:54:42.414: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.253668ms Feb 21 10:54:45.339: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.953356067s Feb 21 10:54:47.350: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.964893267s Feb 21 10:54:49.384: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
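The DaemonSet RollingUpdate test that finishes above creates a one-container DaemonSet, swaps its image, and waits for the replacement pod to become available on every node. The same sequence can be reproduced with a sketch like the one below; the manifest, names and images are illustrative placeholders rather than the exact objects the suite used.

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      name: daemon-set-demo
  updateStrategy:
    type: RollingUpdate            # changed pod templates are rolled out pod by pod
  template:
    metadata:
      labels:
        name: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
$ # Trigger the rolling update by changing the image, then wait for it to converge:
$ kubectl set image daemonset/daemon-set-demo app=gcr.io/kubernetes-e2e-test-images/redis:1.0
$ kubectl rollout status daemonset/daemon-set-demo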
Elapsed: 6.998484206s Feb 21 10:54:51.395: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009871867s Feb 21 10:54:53.406: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.020553495s Feb 21 10:54:55.507: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.122125892s STEP: Saw pod success Feb 21 10:54:55.508: INFO: Pod "downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 10:54:55.521: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 10:54:57.182: INFO: Waiting for pod downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008 to disappear Feb 21 10:54:57.308: INFO: Pod downwardapi-volume-90681755-5498-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:54:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jb2rh" for this suite. Feb 21 10:55:03.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:55:03.441: INFO: namespace: e2e-tests-projected-jb2rh, resource: bindings, ignored listing per whitelist Feb 21 10:55:03.516: INFO: namespace e2e-tests-projected-jb2rh deletion completed in 6.199096135s • [SLOW TEST:21.501 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:55:03.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 21 10:55:10.170: INFO: 10 pods remaining Feb 21 10:55:10.170: INFO: 10 pods has nil DeletionTimestamp Feb 21 10:55:10.170: INFO: Feb 21 10:55:12.274: INFO: 7 pods remaining Feb 21 10:55:12.274: INFO: 6 pods has nil DeletionTimestamp Feb 21 10:55:12.275: INFO: Feb 21 10:55:13.265: INFO: 0 pods remaining Feb 21 10:55:13.266: INFO: 0 pods has nil DeletionTimestamp Feb 21 10:55:13.266: INFO: STEP: Gathering metrics W0221 10:55:14.277556 8 metrics_grabber.go:81] Master node is not registered. 
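The projected downward-API test that finishes above checks that, when a container declares no CPU limit, the value exposed through resourceFieldRef falls back to the node's allocatable CPU. A hedged sketch of such a pod follows; the names, mount path and image are placeholders.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
$ kubectl logs downwardapi-cpu-demo    # prints the node-allocatable CPU because no limit was set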
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 21 10:55:14.277: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:55:14.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9wqt8" for this suite. Feb 21 10:55:32.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:55:32.605: INFO: namespace: e2e-tests-gc-9wqt8, resource: bindings, ignored listing per whitelist Feb 21 10:55:32.722: INFO: namespace e2e-tests-gc-9wqt8 deletion completed in 18.433596349s • [SLOW TEST:29.206 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:55:32.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-zcjlg I0221 10:55:32.827900 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-zcjlg, replica count: 1 I0221 10:55:33.878869 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:34.879442 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 
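The garbage-collector test that finishes above deletes a replication controller with DeleteOptions requesting foreground propagation, so the RC object lingers until all of its pods are gone. Outside the e2e framework the same DeleteOptions can be sent straight to the API server; the namespace and RC name below are placeholders, and recent kubectl releases can achieve the same thing with --cascade=foreground.

$ kubectl proxy --port=8001 &
$ curl -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://127.0.0.1:8001/api/v1/namespaces/gc-demo/replicationcontrollers/simpletest-rc
$ # While the pods are being torn down the RC still exists, now carrying a deletionTimestamp:
$ kubectl get rc simpletest-rc -n gc-demo -o jsonpath='{.metadata.deletionTimestamp}'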
10:55:35.880318 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:36.880918 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:37.881368 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:38.881983 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:39.882703 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0221 10:55:40.883220 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 21 10:55:41.100: INFO: Created: latency-svc-sqkcv Feb 21 10:55:41.217: INFO: Got endpoints: latency-svc-sqkcv [234.038399ms] Feb 21 10:55:41.306: INFO: Created: latency-svc-qhqbs Feb 21 10:55:41.657: INFO: Got endpoints: latency-svc-qhqbs [438.62206ms] Feb 21 10:55:42.796: INFO: Created: latency-svc-cx248 Feb 21 10:55:42.868: INFO: Got endpoints: latency-svc-cx248 [1.649125101s] Feb 21 10:55:43.042: INFO: Created: latency-svc-4hxsz Feb 21 10:55:43.044: INFO: Got endpoints: latency-svc-4hxsz [1.825631261s] Feb 21 10:55:43.097: INFO: Created: latency-svc-w4xqr Feb 21 10:55:43.250: INFO: Got endpoints: latency-svc-w4xqr [2.031287947s] Feb 21 10:55:43.312: INFO: Created: latency-svc-ds6s8 Feb 21 10:55:43.505: INFO: Got endpoints: latency-svc-ds6s8 [2.286215613s] Feb 21 10:55:43.535: INFO: Created: latency-svc-m8k8m Feb 21 10:55:43.558: INFO: Got endpoints: latency-svc-m8k8m [2.339658263s] Feb 21 10:55:43.735: INFO: Created: latency-svc-slbb4 Feb 21 10:55:43.766: INFO: Got endpoints: latency-svc-slbb4 [2.547864639s] Feb 21 10:55:43.918: INFO: Created: latency-svc-9n27q Feb 21 10:55:43.948: INFO: Got endpoints: latency-svc-9n27q [2.728989895s] Feb 21 10:55:44.201: INFO: Created: latency-svc-r959m Feb 21 10:55:44.233: INFO: Got endpoints: latency-svc-r959m [3.014053899s] Feb 21 10:55:44.278: INFO: Created: latency-svc-7nv5p Feb 21 10:55:44.393: INFO: Got endpoints: latency-svc-7nv5p [3.174783286s] Feb 21 10:55:44.405: INFO: Created: latency-svc-7v6vz Feb 21 10:55:44.413: INFO: Got endpoints: latency-svc-7v6vz [3.19446377s] Feb 21 10:55:44.615: INFO: Created: latency-svc-2kzzp Feb 21 10:55:44.615: INFO: Got endpoints: latency-svc-2kzzp [3.39670818s] Feb 21 10:55:44.652: INFO: Created: latency-svc-z6sz2 Feb 21 10:55:44.670: INFO: Got endpoints: latency-svc-z6sz2 [3.451664487s] Feb 21 10:55:44.876: INFO: Created: latency-svc-gmvq8 Feb 21 10:55:44.898: INFO: Got endpoints: latency-svc-gmvq8 [3.679604033s] Feb 21 10:55:45.096: INFO: Created: latency-svc-kjjlv Feb 21 10:55:45.097: INFO: Got endpoints: latency-svc-kjjlv [3.878768291s] Feb 21 10:55:45.325: INFO: Created: latency-svc-r4l8l Feb 21 10:55:45.337: INFO: Got endpoints: latency-svc-r4l8l [3.6799424s] Feb 21 10:55:45.400: INFO: Created: latency-svc-wcttz Feb 21 10:55:45.604: INFO: Got endpoints: latency-svc-wcttz [2.735724678s] Feb 21 10:55:45.615: INFO: Created: latency-svc-kxxhn Feb 21 10:55:45.632: INFO: Got endpoints: latency-svc-kxxhn [2.588056951s] Feb 21 10:55:45.858: INFO: Created: latency-svc-mndcp Feb 21 10:55:45.902: 
INFO: Got endpoints: latency-svc-mndcp [2.651307082s] Feb 21 10:55:46.055: INFO: Created: latency-svc-xmzpg Feb 21 10:55:46.082: INFO: Got endpoints: latency-svc-xmzpg [2.577557863s] Feb 21 10:55:46.145: INFO: Created: latency-svc-rl8rf Feb 21 10:55:46.237: INFO: Got endpoints: latency-svc-rl8rf [2.678270483s] Feb 21 10:55:46.279: INFO: Created: latency-svc-cfbk7 Feb 21 10:55:46.322: INFO: Got endpoints: latency-svc-cfbk7 [2.555162007s] Feb 21 10:55:46.500: INFO: Created: latency-svc-2p4hm Feb 21 10:55:46.530: INFO: Got endpoints: latency-svc-2p4hm [292.822731ms] Feb 21 10:55:46.581: INFO: Created: latency-svc-n6jdl Feb 21 10:55:46.763: INFO: Got endpoints: latency-svc-n6jdl [2.815374002s] Feb 21 10:55:46.815: INFO: Created: latency-svc-brvkv Feb 21 10:55:46.841: INFO: Got endpoints: latency-svc-brvkv [2.607781201s] Feb 21 10:55:46.967: INFO: Created: latency-svc-b6n7g Feb 21 10:55:46.986: INFO: Got endpoints: latency-svc-b6n7g [2.592259233s] Feb 21 10:55:47.076: INFO: Created: latency-svc-5q7xc Feb 21 10:55:47.271: INFO: Got endpoints: latency-svc-5q7xc [2.858344109s] Feb 21 10:55:47.286: INFO: Created: latency-svc-ftfmp Feb 21 10:55:47.313: INFO: Got endpoints: latency-svc-ftfmp [2.697785063s] Feb 21 10:55:48.505: INFO: Created: latency-svc-tdxn5 Feb 21 10:55:48.626: INFO: Got endpoints: latency-svc-tdxn5 [3.95523829s] Feb 21 10:55:48.810: INFO: Created: latency-svc-9l2lw Feb 21 10:55:49.132: INFO: Got endpoints: latency-svc-9l2lw [4.234354786s] Feb 21 10:55:49.355: INFO: Created: latency-svc-wfgk8 Feb 21 10:55:49.401: INFO: Got endpoints: latency-svc-wfgk8 [4.303917711s] Feb 21 10:55:49.619: INFO: Created: latency-svc-5l5k7 Feb 21 10:55:49.781: INFO: Got endpoints: latency-svc-5l5k7 [4.443791079s] Feb 21 10:55:50.061: INFO: Created: latency-svc-spfdr Feb 21 10:55:50.086: INFO: Got endpoints: latency-svc-spfdr [4.482366596s] Feb 21 10:55:50.137: INFO: Created: latency-svc-r8vj6 Feb 21 10:55:50.253: INFO: Got endpoints: latency-svc-r8vj6 [4.620850063s] Feb 21 10:55:50.280: INFO: Created: latency-svc-ffkvt Feb 21 10:55:50.297: INFO: Got endpoints: latency-svc-ffkvt [4.395162329s] Feb 21 10:55:50.331: INFO: Created: latency-svc-rfcg9 Feb 21 10:55:50.543: INFO: Got endpoints: latency-svc-rfcg9 [4.459954608s] Feb 21 10:55:50.549: INFO: Created: latency-svc-9flhn Feb 21 10:55:50.569: INFO: Got endpoints: latency-svc-9flhn [4.247393362s] Feb 21 10:55:50.626: INFO: Created: latency-svc-gwzkm Feb 21 10:55:50.884: INFO: Got endpoints: latency-svc-gwzkm [4.353281179s] Feb 21 10:55:50.937: INFO: Created: latency-svc-dxpj7 Feb 21 10:55:51.059: INFO: Got endpoints: latency-svc-dxpj7 [4.295170874s] Feb 21 10:55:51.250: INFO: Created: latency-svc-ntcx4 Feb 21 10:55:51.322: INFO: Got endpoints: latency-svc-ntcx4 [4.481646842s] Feb 21 10:55:52.426: INFO: Created: latency-svc-kwlqk Feb 21 10:55:52.436: INFO: Got endpoints: latency-svc-kwlqk [5.449267039s] Feb 21 10:55:52.921: INFO: Created: latency-svc-b5zls Feb 21 10:55:52.949: INFO: Got endpoints: latency-svc-b5zls [5.677074516s] Feb 21 10:55:53.164: INFO: Created: latency-svc-t5gbd Feb 21 10:55:53.775: INFO: Got endpoints: latency-svc-t5gbd [6.461738656s] Feb 21 10:55:53.801: INFO: Created: latency-svc-ktsxk Feb 21 10:55:53.834: INFO: Got endpoints: latency-svc-ktsxk [5.207515863s] Feb 21 10:55:53.982: INFO: Created: latency-svc-7c65b Feb 21 10:55:54.044: INFO: Got endpoints: latency-svc-7c65b [4.91137299s] Feb 21 10:55:54.194: INFO: Created: latency-svc-4fqtn Feb 21 10:55:54.242: INFO: Got endpoints: latency-svc-4fqtn [4.840850258s] Feb 21 
10:55:54.372: INFO: Created: latency-svc-6s5tq Feb 21 10:55:54.399: INFO: Got endpoints: latency-svc-6s5tq [4.617774524s] Feb 21 10:55:55.811: INFO: Created: latency-svc-rpqqh Feb 21 10:55:55.821: INFO: Got endpoints: latency-svc-rpqqh [5.734912742s] Feb 21 10:55:55.890: INFO: Created: latency-svc-t2jp4 Feb 21 10:55:55.989: INFO: Got endpoints: latency-svc-t2jp4 [5.735976543s] Feb 21 10:55:56.063: INFO: Created: latency-svc-hkrll Feb 21 10:55:56.063: INFO: Got endpoints: latency-svc-hkrll [5.765436294s] Feb 21 10:55:56.220: INFO: Created: latency-svc-2fd2h Feb 21 10:55:56.241: INFO: Got endpoints: latency-svc-2fd2h [5.69812055s] Feb 21 10:55:56.425: INFO: Created: latency-svc-dlh2q Feb 21 10:55:56.426: INFO: Got endpoints: latency-svc-dlh2q [5.856236419s] Feb 21 10:55:56.508: INFO: Created: latency-svc-844r5 Feb 21 10:55:56.704: INFO: Got endpoints: latency-svc-844r5 [5.819466267s] Feb 21 10:55:56.748: INFO: Created: latency-svc-gfmzf Feb 21 10:55:56.765: INFO: Got endpoints: latency-svc-gfmzf [5.70545785s] Feb 21 10:55:56.879: INFO: Created: latency-svc-p9d6w Feb 21 10:55:56.899: INFO: Got endpoints: latency-svc-p9d6w [5.575848621s] Feb 21 10:55:56.948: INFO: Created: latency-svc-krkf6 Feb 21 10:55:56.958: INFO: Got endpoints: latency-svc-krkf6 [4.521984749s] Feb 21 10:55:57.141: INFO: Created: latency-svc-sj8tv Feb 21 10:55:57.152: INFO: Got endpoints: latency-svc-sj8tv [4.203009466s] Feb 21 10:55:57.216: INFO: Created: latency-svc-pjq5t Feb 21 10:55:57.313: INFO: Got endpoints: latency-svc-pjq5t [3.537502258s] Feb 21 10:55:57.362: INFO: Created: latency-svc-2m6xl Feb 21 10:55:57.400: INFO: Got endpoints: latency-svc-2m6xl [3.566051309s] Feb 21 10:55:57.651: INFO: Created: latency-svc-mw6zk Feb 21 10:55:57.664: INFO: Got endpoints: latency-svc-mw6zk [3.618887157s] Feb 21 10:55:57.885: INFO: Created: latency-svc-msxz8 Feb 21 10:55:57.910: INFO: Got endpoints: latency-svc-msxz8 [3.667695161s] Feb 21 10:55:57.978: INFO: Created: latency-svc-hpv8s Feb 21 10:55:58.128: INFO: Got endpoints: latency-svc-hpv8s [3.728770241s] Feb 21 10:55:58.196: INFO: Created: latency-svc-4jgkm Feb 21 10:55:58.201: INFO: Got endpoints: latency-svc-4jgkm [2.379891922s] Feb 21 10:55:58.478: INFO: Created: latency-svc-pngjh Feb 21 10:55:58.507: INFO: Got endpoints: latency-svc-pngjh [2.517282168s] Feb 21 10:55:58.865: INFO: Created: latency-svc-jk4zl Feb 21 10:55:58.989: INFO: Got endpoints: latency-svc-jk4zl [2.925776844s] Feb 21 10:55:59.007: INFO: Created: latency-svc-gtpmd Feb 21 10:55:59.051: INFO: Got endpoints: latency-svc-gtpmd [2.809867518s] Feb 21 10:55:59.301: INFO: Created: latency-svc-fvgkg Feb 21 10:55:59.337: INFO: Got endpoints: latency-svc-fvgkg [2.911443149s] Feb 21 10:55:59.474: INFO: Created: latency-svc-bx6xs Feb 21 10:55:59.495: INFO: Got endpoints: latency-svc-bx6xs [2.791092552s] Feb 21 10:55:59.583: INFO: Created: latency-svc-hlhcd Feb 21 10:55:59.655: INFO: Got endpoints: latency-svc-hlhcd [2.889487557s] Feb 21 10:55:59.680: INFO: Created: latency-svc-fgt7l Feb 21 10:55:59.683: INFO: Got endpoints: latency-svc-fgt7l [2.783978141s] Feb 21 10:55:59.868: INFO: Created: latency-svc-dl9f8 Feb 21 10:55:59.885: INFO: Got endpoints: latency-svc-dl9f8 [2.925883406s] Feb 21 10:55:59.932: INFO: Created: latency-svc-rv6b2 Feb 21 10:56:00.042: INFO: Got endpoints: latency-svc-rv6b2 [2.889199645s] Feb 21 10:56:00.240: INFO: Created: latency-svc-vtkn8 Feb 21 10:56:00.285: INFO: Got endpoints: latency-svc-vtkn8 [2.971641449s] Feb 21 10:56:00.957: INFO: Created: latency-svc-bxnsj Feb 21 10:56:00.957: INFO: 
Got endpoints: latency-svc-bxnsj [3.557054265s] Feb 21 10:56:01.094: INFO: Created: latency-svc-lzhnd Feb 21 10:56:01.116: INFO: Got endpoints: latency-svc-lzhnd [3.451965192s] Feb 21 10:56:01.263: INFO: Created: latency-svc-d4gxp Feb 21 10:56:01.276: INFO: Got endpoints: latency-svc-d4gxp [3.365716663s] Feb 21 10:56:01.440: INFO: Created: latency-svc-jnz4d Feb 21 10:56:01.525: INFO: Created: latency-svc-pg55v Feb 21 10:56:01.609: INFO: Got endpoints: latency-svc-jnz4d [3.48095526s] Feb 21 10:56:01.632: INFO: Got endpoints: latency-svc-pg55v [3.430362567s] Feb 21 10:56:01.733: INFO: Created: latency-svc-xpl2c Feb 21 10:56:01.828: INFO: Got endpoints: latency-svc-xpl2c [3.320947513s] Feb 21 10:56:01.842: INFO: Created: latency-svc-nr5ds Feb 21 10:56:01.858: INFO: Got endpoints: latency-svc-nr5ds [2.868301611s] Feb 21 10:56:02.023: INFO: Created: latency-svc-zzs7q Feb 21 10:56:02.044: INFO: Got endpoints: latency-svc-zzs7q [2.992447221s] Feb 21 10:56:02.107: INFO: Created: latency-svc-7qv98 Feb 21 10:56:02.224: INFO: Got endpoints: latency-svc-7qv98 [2.8861118s] Feb 21 10:56:02.239: INFO: Created: latency-svc-9t2zq Feb 21 10:56:02.277: INFO: Got endpoints: latency-svc-9t2zq [2.781997949s] Feb 21 10:56:02.374: INFO: Created: latency-svc-n5472 Feb 21 10:56:02.404: INFO: Got endpoints: latency-svc-n5472 [2.748998307s] Feb 21 10:56:02.452: INFO: Created: latency-svc-txhqw Feb 21 10:56:02.641: INFO: Got endpoints: latency-svc-txhqw [2.957779962s] Feb 21 10:56:02.677: INFO: Created: latency-svc-99hcm Feb 21 10:56:02.722: INFO: Got endpoints: latency-svc-99hcm [2.837498845s] Feb 21 10:56:02.981: INFO: Created: latency-svc-4q8mw Feb 21 10:56:03.007: INFO: Got endpoints: latency-svc-4q8mw [2.965228678s] Feb 21 10:56:03.022: INFO: Created: latency-svc-vhc29 Feb 21 10:56:03.041: INFO: Got endpoints: latency-svc-vhc29 [2.756093484s] Feb 21 10:56:03.160: INFO: Created: latency-svc-crlqw Feb 21 10:56:03.210: INFO: Got endpoints: latency-svc-crlqw [2.252158649s] Feb 21 10:56:03.332: INFO: Created: latency-svc-p8m8n Feb 21 10:56:03.334: INFO: Got endpoints: latency-svc-p8m8n [2.218232484s] Feb 21 10:56:03.492: INFO: Created: latency-svc-dr7tl Feb 21 10:56:03.506: INFO: Got endpoints: latency-svc-dr7tl [2.230007945s] Feb 21 10:56:03.521: INFO: Created: latency-svc-557w2 Feb 21 10:56:03.552: INFO: Got endpoints: latency-svc-557w2 [1.941909958s] Feb 21 10:56:03.570: INFO: Created: latency-svc-2fh78 Feb 21 10:56:03.590: INFO: Got endpoints: latency-svc-2fh78 [1.95804838s] Feb 21 10:56:03.606: INFO: Created: latency-svc-hvcs5 Feb 21 10:56:03.787: INFO: Got endpoints: latency-svc-hvcs5 [1.958150112s] Feb 21 10:56:03.799: INFO: Created: latency-svc-7fjbw Feb 21 10:56:03.945: INFO: Got endpoints: latency-svc-7fjbw [2.087230822s] Feb 21 10:56:03.975: INFO: Created: latency-svc-2k9sz Feb 21 10:56:03.995: INFO: Got endpoints: latency-svc-2k9sz [1.95021155s] Feb 21 10:56:04.149: INFO: Created: latency-svc-9mspw Feb 21 10:56:04.176: INFO: Got endpoints: latency-svc-9mspw [1.95218238s] Feb 21 10:56:04.197: INFO: Created: latency-svc-qzf6s Feb 21 10:56:04.210: INFO: Got endpoints: latency-svc-qzf6s [1.932795055s] Feb 21 10:56:04.358: INFO: Created: latency-svc-5ffkz Feb 21 10:56:04.404: INFO: Got endpoints: latency-svc-5ffkz [1.99908174s] Feb 21 10:56:04.539: INFO: Created: latency-svc-k446h Feb 21 10:56:04.601: INFO: Got endpoints: latency-svc-k446h [1.959008285s] Feb 21 10:56:04.699: INFO: Created: latency-svc-5kcqb Feb 21 10:56:04.717: INFO: Got endpoints: latency-svc-5kcqb [1.99387328s] Feb 21 10:56:04.768: INFO: 
Created: latency-svc-th759 Feb 21 10:56:04.784: INFO: Got endpoints: latency-svc-th759 [1.77691233s] Feb 21 10:56:04.931: INFO: Created: latency-svc-66brd Feb 21 10:56:04.938: INFO: Got endpoints: latency-svc-66brd [1.896983246s] Feb 21 10:56:05.013: INFO: Created: latency-svc-bggmd Feb 21 10:56:05.092: INFO: Got endpoints: latency-svc-bggmd [1.88237135s] Feb 21 10:56:05.156: INFO: Created: latency-svc-nqjxf Feb 21 10:56:05.172: INFO: Got endpoints: latency-svc-nqjxf [1.837906707s] Feb 21 10:56:05.362: INFO: Created: latency-svc-9t6d5 Feb 21 10:56:05.371: INFO: Got endpoints: latency-svc-9t6d5 [1.864685772s] Feb 21 10:56:05.525: INFO: Created: latency-svc-m72c8 Feb 21 10:56:05.561: INFO: Got endpoints: latency-svc-m72c8 [2.009017154s] Feb 21 10:56:05.566: INFO: Created: latency-svc-9b9zc Feb 21 10:56:05.574: INFO: Got endpoints: latency-svc-9b9zc [1.9838138s] Feb 21 10:56:05.734: INFO: Created: latency-svc-v85x7 Feb 21 10:56:05.763: INFO: Got endpoints: latency-svc-v85x7 [1.976659496s] Feb 21 10:56:05.894: INFO: Created: latency-svc-kbxq6 Feb 21 10:56:05.914: INFO: Got endpoints: latency-svc-kbxq6 [1.968626087s] Feb 21 10:56:05.995: INFO: Created: latency-svc-9tt2d Feb 21 10:56:06.125: INFO: Got endpoints: latency-svc-9tt2d [2.129686581s] Feb 21 10:56:06.167: INFO: Created: latency-svc-xnn4l Feb 21 10:56:06.203: INFO: Got endpoints: latency-svc-xnn4l [2.026603898s] Feb 21 10:56:06.340: INFO: Created: latency-svc-sn4jr Feb 21 10:56:06.349: INFO: Got endpoints: latency-svc-sn4jr [2.138031666s] Feb 21 10:56:06.397: INFO: Created: latency-svc-8g45m Feb 21 10:56:06.535: INFO: Got endpoints: latency-svc-8g45m [2.13098107s] Feb 21 10:56:06.574: INFO: Created: latency-svc-sv84t Feb 21 10:56:06.743: INFO: Got endpoints: latency-svc-sv84t [2.142333547s] Feb 21 10:56:06.745: INFO: Created: latency-svc-rdd9s Feb 21 10:56:06.769: INFO: Got endpoints: latency-svc-rdd9s [2.052163166s] Feb 21 10:56:06.860: INFO: Created: latency-svc-lpsjc Feb 21 10:56:06.928: INFO: Got endpoints: latency-svc-lpsjc [2.142967029s] Feb 21 10:56:07.015: INFO: Created: latency-svc-qnm6h Feb 21 10:56:07.214: INFO: Got endpoints: latency-svc-qnm6h [2.275801057s] Feb 21 10:56:07.268: INFO: Created: latency-svc-pmkgl Feb 21 10:56:07.268: INFO: Got endpoints: latency-svc-pmkgl [2.175553237s] Feb 21 10:56:07.318: INFO: Created: latency-svc-bmpsk Feb 21 10:56:07.470: INFO: Got endpoints: latency-svc-bmpsk [2.297677808s] Feb 21 10:56:07.534: INFO: Created: latency-svc-wq52v Feb 21 10:56:07.535: INFO: Got endpoints: latency-svc-wq52v [2.163419729s] Feb 21 10:56:07.700: INFO: Created: latency-svc-7fwsl Feb 21 10:56:07.715: INFO: Got endpoints: latency-svc-7fwsl [2.154013676s] Feb 21 10:56:07.789: INFO: Created: latency-svc-nsvcf Feb 21 10:56:07.789: INFO: Got endpoints: latency-svc-nsvcf [2.21512539s] Feb 21 10:56:07.959: INFO: Created: latency-svc-pq58d Feb 21 10:56:07.959: INFO: Got endpoints: latency-svc-pq58d [2.195748896s] Feb 21 10:56:08.118: INFO: Created: latency-svc-zgm6c Feb 21 10:56:08.143: INFO: Got endpoints: latency-svc-zgm6c [2.228577246s] Feb 21 10:56:09.025: INFO: Created: latency-svc-hhc7b Feb 21 10:56:09.053: INFO: Got endpoints: latency-svc-hhc7b [2.928260915s] Feb 21 10:56:09.284: INFO: Created: latency-svc-s4v8l Feb 21 10:56:09.306: INFO: Got endpoints: latency-svc-s4v8l [3.102430221s] Feb 21 10:56:09.446: INFO: Created: latency-svc-r5kqx Feb 21 10:56:09.456: INFO: Got endpoints: latency-svc-r5kqx [3.106707816s] Feb 21 10:56:09.508: INFO: Created: latency-svc-tqgsl Feb 21 10:56:09.517: INFO: Got endpoints: 
latency-svc-tqgsl [2.981472194s] Feb 21 10:56:09.629: INFO: Created: latency-svc-675m9 Feb 21 10:56:09.703: INFO: Got endpoints: latency-svc-675m9 [2.959088489s] Feb 21 10:56:09.720: INFO: Created: latency-svc-hzm5v Feb 21 10:56:09.837: INFO: Got endpoints: latency-svc-hzm5v [3.068276761s] Feb 21 10:56:09.840: INFO: Created: latency-svc-x7kh2 Feb 21 10:56:09.874: INFO: Got endpoints: latency-svc-x7kh2 [2.946488516s] Feb 21 10:56:09.935: INFO: Created: latency-svc-s5nds Feb 21 10:56:10.003: INFO: Got endpoints: latency-svc-s5nds [2.788880813s] Feb 21 10:56:10.057: INFO: Created: latency-svc-lf7pk Feb 21 10:56:10.084: INFO: Got endpoints: latency-svc-lf7pk [2.816091139s] Feb 21 10:56:10.219: INFO: Created: latency-svc-8w6pc Feb 21 10:56:10.249: INFO: Got endpoints: latency-svc-8w6pc [2.778853879s] Feb 21 10:56:10.426: INFO: Created: latency-svc-7thxf Feb 21 10:56:10.475: INFO: Got endpoints: latency-svc-7thxf [2.940680335s] Feb 21 10:56:10.561: INFO: Created: latency-svc-q8jvw Feb 21 10:56:10.760: INFO: Got endpoints: latency-svc-q8jvw [3.044513865s] Feb 21 10:56:10.807: INFO: Created: latency-svc-zzm65 Feb 21 10:56:10.817: INFO: Got endpoints: latency-svc-zzm65 [3.027641835s] Feb 21 10:56:11.003: INFO: Created: latency-svc-gwmst Feb 21 10:56:11.034: INFO: Got endpoints: latency-svc-gwmst [3.074770758s] Feb 21 10:56:11.159: INFO: Created: latency-svc-7fmcw Feb 21 10:56:11.203: INFO: Got endpoints: latency-svc-7fmcw [3.060608388s] Feb 21 10:56:11.375: INFO: Created: latency-svc-xdvf2 Feb 21 10:56:11.414: INFO: Got endpoints: latency-svc-xdvf2 [2.360464451s] Feb 21 10:56:11.571: INFO: Created: latency-svc-pzfd7 Feb 21 10:56:11.584: INFO: Got endpoints: latency-svc-pzfd7 [2.278105063s] Feb 21 10:56:11.643: INFO: Created: latency-svc-rjrf6 Feb 21 10:56:11.777: INFO: Created: latency-svc-t2qzm Feb 21 10:56:11.790: INFO: Got endpoints: latency-svc-rjrf6 [2.334158694s] Feb 21 10:56:11.847: INFO: Got endpoints: latency-svc-t2qzm [2.330487217s] Feb 21 10:56:11.852: INFO: Created: latency-svc-pjwm5 Feb 21 10:56:11.960: INFO: Got endpoints: latency-svc-pjwm5 [2.256803378s] Feb 21 10:56:11.972: INFO: Created: latency-svc-h782w Feb 21 10:56:11.993: INFO: Got endpoints: latency-svc-h782w [2.155234489s] Feb 21 10:56:12.054: INFO: Created: latency-svc-ns2b6 Feb 21 10:56:12.150: INFO: Got endpoints: latency-svc-ns2b6 [2.275075326s] Feb 21 10:56:12.197: INFO: Created: latency-svc-k4b5m Feb 21 10:56:12.218: INFO: Got endpoints: latency-svc-k4b5m [2.214502986s] Feb 21 10:56:12.348: INFO: Created: latency-svc-69m6w Feb 21 10:56:12.361: INFO: Got endpoints: latency-svc-69m6w [2.276582881s] Feb 21 10:56:12.435: INFO: Created: latency-svc-ntx8b Feb 21 10:56:12.672: INFO: Got endpoints: latency-svc-ntx8b [2.422422998s] Feb 21 10:56:12.703: INFO: Created: latency-svc-vt8j9 Feb 21 10:56:12.713: INFO: Got endpoints: latency-svc-vt8j9 [2.236924565s] Feb 21 10:56:12.915: INFO: Created: latency-svc-nm62s Feb 21 10:56:12.948: INFO: Got endpoints: latency-svc-nm62s [2.18777323s] Feb 21 10:56:13.020: INFO: Created: latency-svc-bq8m7 Feb 21 10:56:13.078: INFO: Got endpoints: latency-svc-bq8m7 [2.260500608s] Feb 21 10:56:13.130: INFO: Created: latency-svc-blqpb Feb 21 10:56:13.152: INFO: Got endpoints: latency-svc-blqpb [2.117790032s] Feb 21 10:56:13.296: INFO: Created: latency-svc-z9djt Feb 21 10:56:13.306: INFO: Got endpoints: latency-svc-z9djt [2.101924246s] Feb 21 10:56:13.400: INFO: Created: latency-svc-t7fsn Feb 21 10:56:13.474: INFO: Got endpoints: latency-svc-t7fsn [2.059285078s] Feb 21 10:56:13.532: INFO: Created: 
latency-svc-p86zc Feb 21 10:56:13.541: INFO: Got endpoints: latency-svc-p86zc [1.956266285s] Feb 21 10:56:13.676: INFO: Created: latency-svc-ph68m Feb 21 10:56:13.709: INFO: Got endpoints: latency-svc-ph68m [1.918796039s] Feb 21 10:56:13.945: INFO: Created: latency-svc-vc9q9 Feb 21 10:56:13.985: INFO: Got endpoints: latency-svc-vc9q9 [2.137685158s] Feb 21 10:56:14.140: INFO: Created: latency-svc-hcrbm Feb 21 10:56:14.156: INFO: Got endpoints: latency-svc-hcrbm [2.196277763s] Feb 21 10:56:14.219: INFO: Created: latency-svc-l4b2s Feb 21 10:56:14.326: INFO: Got endpoints: latency-svc-l4b2s [2.332658865s] Feb 21 10:56:14.358: INFO: Created: latency-svc-w6s45 Feb 21 10:56:14.361: INFO: Got endpoints: latency-svc-w6s45 [2.210425317s] Feb 21 10:56:14.421: INFO: Created: latency-svc-lsjww Feb 21 10:56:14.609: INFO: Got endpoints: latency-svc-lsjww [2.390119932s] Feb 21 10:56:14.626: INFO: Created: latency-svc-54wcx Feb 21 10:56:14.645: INFO: Got endpoints: latency-svc-54wcx [2.283246924s] Feb 21 10:56:14.683: INFO: Created: latency-svc-wwckz Feb 21 10:56:14.774: INFO: Got endpoints: latency-svc-wwckz [2.102320226s] Feb 21 10:56:14.838: INFO: Created: latency-svc-6dhf6 Feb 21 10:56:14.848: INFO: Got endpoints: latency-svc-6dhf6 [2.135635836s] Feb 21 10:56:14.959: INFO: Created: latency-svc-svk4j Feb 21 10:56:14.982: INFO: Got endpoints: latency-svc-svk4j [2.03434401s] Feb 21 10:56:15.035: INFO: Created: latency-svc-cnjxg Feb 21 10:56:15.037: INFO: Got endpoints: latency-svc-cnjxg [1.95946184s] Feb 21 10:56:15.171: INFO: Created: latency-svc-ls8j4 Feb 21 10:56:15.193: INFO: Got endpoints: latency-svc-ls8j4 [2.04055002s] Feb 21 10:56:15.218: INFO: Created: latency-svc-jwknv Feb 21 10:56:15.466: INFO: Got endpoints: latency-svc-jwknv [2.160189969s] Feb 21 10:56:15.501: INFO: Created: latency-svc-cgkrf Feb 21 10:56:15.501: INFO: Got endpoints: latency-svc-cgkrf [2.026820457s] Feb 21 10:56:15.556: INFO: Created: latency-svc-zb7dv Feb 21 10:56:15.649: INFO: Got endpoints: latency-svc-zb7dv [2.107846443s] Feb 21 10:56:15.679: INFO: Created: latency-svc-lgtc5 Feb 21 10:56:15.684: INFO: Got endpoints: latency-svc-lgtc5 [1.974522576s] Feb 21 10:56:15.738: INFO: Created: latency-svc-5fkr6 Feb 21 10:56:15.844: INFO: Got endpoints: latency-svc-5fkr6 [1.858040326s] Feb 21 10:56:15.894: INFO: Created: latency-svc-4tgzm Feb 21 10:56:16.047: INFO: Got endpoints: latency-svc-4tgzm [1.890162768s] Feb 21 10:56:16.071: INFO: Created: latency-svc-qf9ct Feb 21 10:56:16.117: INFO: Got endpoints: latency-svc-qf9ct [1.790227943s] Feb 21 10:56:16.306: INFO: Created: latency-svc-xw7kx Feb 21 10:56:16.317: INFO: Got endpoints: latency-svc-xw7kx [1.956242571s] Feb 21 10:56:16.470: INFO: Created: latency-svc-mcrsc Feb 21 10:56:16.497: INFO: Got endpoints: latency-svc-mcrsc [1.887901395s] Feb 21 10:56:16.565: INFO: Created: latency-svc-xxbxk Feb 21 10:56:16.739: INFO: Got endpoints: latency-svc-xxbxk [2.093921311s] Feb 21 10:56:16.758: INFO: Created: latency-svc-rftvm Feb 21 10:56:16.777: INFO: Got endpoints: latency-svc-rftvm [2.002150594s] Feb 21 10:56:16.827: INFO: Created: latency-svc-hmhnv Feb 21 10:56:16.837: INFO: Got endpoints: latency-svc-hmhnv [1.988432142s] Feb 21 10:56:16.951: INFO: Created: latency-svc-2fprk Feb 21 10:56:16.973: INFO: Got endpoints: latency-svc-2fprk [1.990671445s] Feb 21 10:56:17.033: INFO: Created: latency-svc-mptsj Feb 21 10:56:17.103: INFO: Got endpoints: latency-svc-mptsj [2.065791301s] Feb 21 10:56:17.119: INFO: Created: latency-svc-cd5bl Feb 21 10:56:17.148: INFO: Got endpoints: 
latency-svc-cd5bl [1.954394033s] Feb 21 10:56:17.220: INFO: Created: latency-svc-lm8vr Feb 21 10:56:17.326: INFO: Got endpoints: latency-svc-lm8vr [1.859629309s] Feb 21 10:56:17.364: INFO: Created: latency-svc-r5gqv Feb 21 10:56:17.364: INFO: Got endpoints: latency-svc-r5gqv [1.863583221s] Feb 21 10:56:17.421: INFO: Created: latency-svc-nc4dh Feb 21 10:56:17.529: INFO: Got endpoints: latency-svc-nc4dh [1.880342464s] Feb 21 10:56:17.551: INFO: Created: latency-svc-hz7l5 Feb 21 10:56:17.693: INFO: Got endpoints: latency-svc-hz7l5 [2.009327562s] Feb 21 10:56:18.439: INFO: Created: latency-svc-qnsfg Feb 21 10:56:18.459: INFO: Got endpoints: latency-svc-qnsfg [2.614732604s] Feb 21 10:56:18.708: INFO: Created: latency-svc-s48sv Feb 21 10:56:18.834: INFO: Got endpoints: latency-svc-s48sv [2.787282783s] Feb 21 10:56:18.914: INFO: Created: latency-svc-w8mxg Feb 21 10:56:18.923: INFO: Got endpoints: latency-svc-w8mxg [2.80631129s] Feb 21 10:56:19.014: INFO: Created: latency-svc-l77j6 Feb 21 10:56:19.028: INFO: Got endpoints: latency-svc-l77j6 [2.710796406s] Feb 21 10:56:19.064: INFO: Created: latency-svc-r7ll5 Feb 21 10:56:19.073: INFO: Got endpoints: latency-svc-r7ll5 [2.575765491s] Feb 21 10:56:19.174: INFO: Created: latency-svc-sbbqd Feb 21 10:56:19.200: INFO: Got endpoints: latency-svc-sbbqd [2.461221818s] Feb 21 10:56:19.248: INFO: Created: latency-svc-qjqg2 Feb 21 10:56:19.258: INFO: Got endpoints: latency-svc-qjqg2 [2.480889277s] Feb 21 10:56:19.415: INFO: Created: latency-svc-n2gvf Feb 21 10:56:19.446: INFO: Got endpoints: latency-svc-n2gvf [2.608211879s] Feb 21 10:56:19.580: INFO: Created: latency-svc-2z94x Feb 21 10:56:19.601: INFO: Got endpoints: latency-svc-2z94x [2.627129381s] Feb 21 10:56:19.641: INFO: Created: latency-svc-b9p6v Feb 21 10:56:19.645: INFO: Got endpoints: latency-svc-b9p6v [2.541366677s] Feb 21 10:56:19.790: INFO: Created: latency-svc-z9fsp Feb 21 10:56:19.809: INFO: Got endpoints: latency-svc-z9fsp [2.661028191s] Feb 21 10:56:19.845: INFO: Created: latency-svc-s5nsp Feb 21 10:56:19.851: INFO: Got endpoints: latency-svc-s5nsp [2.525215383s] Feb 21 10:56:19.851: INFO: Latencies: [292.822731ms 438.62206ms 1.649125101s 1.77691233s 1.790227943s 1.825631261s 1.837906707s 1.858040326s 1.859629309s 1.863583221s 1.864685772s 1.880342464s 1.88237135s 1.887901395s 1.890162768s 1.896983246s 1.918796039s 1.932795055s 1.941909958s 1.95021155s 1.95218238s 1.954394033s 1.956242571s 1.956266285s 1.95804838s 1.958150112s 1.959008285s 1.95946184s 1.968626087s 1.974522576s 1.976659496s 1.9838138s 1.988432142s 1.990671445s 1.99387328s 1.99908174s 2.002150594s 2.009017154s 2.009327562s 2.026603898s 2.026820457s 2.031287947s 2.03434401s 2.04055002s 2.052163166s 2.059285078s 2.065791301s 2.087230822s 2.093921311s 2.101924246s 2.102320226s 2.107846443s 2.117790032s 2.129686581s 2.13098107s 2.135635836s 2.137685158s 2.138031666s 2.142333547s 2.142967029s 2.154013676s 2.155234489s 2.160189969s 2.163419729s 2.175553237s 2.18777323s 2.195748896s 2.196277763s 2.210425317s 2.214502986s 2.21512539s 2.218232484s 2.228577246s 2.230007945s 2.236924565s 2.252158649s 2.256803378s 2.260500608s 2.275075326s 2.275801057s 2.276582881s 2.278105063s 2.283246924s 2.286215613s 2.297677808s 2.330487217s 2.332658865s 2.334158694s 2.339658263s 2.360464451s 2.379891922s 2.390119932s 2.422422998s 2.461221818s 2.480889277s 2.517282168s 2.525215383s 2.541366677s 2.547864639s 2.555162007s 2.575765491s 2.577557863s 2.588056951s 2.592259233s 2.607781201s 2.608211879s 2.614732604s 2.627129381s 2.651307082s 2.661028191s 
2.678270483s 2.697785063s 2.710796406s 2.728989895s 2.735724678s 2.748998307s 2.756093484s 2.778853879s 2.781997949s 2.783978141s 2.787282783s 2.788880813s 2.791092552s 2.80631129s 2.809867518s 2.815374002s 2.816091139s 2.837498845s 2.858344109s 2.868301611s 2.8861118s 2.889199645s 2.889487557s 2.911443149s 2.925776844s 2.925883406s 2.928260915s 2.940680335s 2.946488516s 2.957779962s 2.959088489s 2.965228678s 2.971641449s 2.981472194s 2.992447221s 3.014053899s 3.027641835s 3.044513865s 3.060608388s 3.068276761s 3.074770758s 3.102430221s 3.106707816s 3.174783286s 3.19446377s 3.320947513s 3.365716663s 3.39670818s 3.430362567s 3.451664487s 3.451965192s 3.48095526s 3.537502258s 3.557054265s 3.566051309s 3.618887157s 3.667695161s 3.679604033s 3.6799424s 3.728770241s 3.878768291s 3.95523829s 4.203009466s 4.234354786s 4.247393362s 4.295170874s 4.303917711s 4.353281179s 4.395162329s 4.443791079s 4.459954608s 4.481646842s 4.482366596s 4.521984749s 4.617774524s 4.620850063s 4.840850258s 4.91137299s 5.207515863s 5.449267039s 5.575848621s 5.677074516s 5.69812055s 5.70545785s 5.734912742s 5.735976543s 5.765436294s 5.819466267s 5.856236419s 6.461738656s] Feb 21 10:56:19.852: INFO: 50 %ile: 2.575765491s Feb 21 10:56:19.852: INFO: 90 %ile: 4.459954608s Feb 21 10:56:19.852: INFO: 99 %ile: 5.856236419s Feb 21 10:56:19.852: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:56:19.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-zcjlg" for this suite. Feb 21 10:57:15.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:57:16.116: INFO: namespace: e2e-tests-svc-latency-zcjlg, resource: bindings, ignored listing per whitelist Feb 21 10:57:16.167: INFO: namespace e2e-tests-svc-latency-zcjlg deletion completed in 56.211200725s • [SLOW TEST:103.445 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:57:16.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 10:57:16.356: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
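The svc-latency test that finishes above creates roughly 200 services backed by a single replication controller, records the time from each service's creation to its first Endpoints update, and reports the 50th/90th/99th percentiles of those samples (the sorted list printed just before the summary). Taking one such sample by hand can be approximated as below; the test itself watches endpoints through an informer rather than polling, and the RC name, service name, namespace and ports here are placeholders for an existing backend.

$ kubectl expose rc latency-backend --name=latency-probe --port=80 --target-port=8080 -n svc-latency-demo
$ start=$(date +%s%N)
$ until [ -n "$(kubectl get endpoints latency-probe -n svc-latency-demo \
      -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do sleep 0.2; done
$ echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"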
Feb 21 10:57:16.372: INFO: Number of nodes with available pods: 0 Feb 21 10:57:16.372: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Feb 21 10:57:16.452: INFO: Number of nodes with available pods: 0 Feb 21 10:57:16.452: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:18.219: INFO: Number of nodes with available pods: 0 Feb 21 10:57:18.219: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:18.510: INFO: Number of nodes with available pods: 0 Feb 21 10:57:18.511: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:19.553: INFO: Number of nodes with available pods: 0 Feb 21 10:57:19.553: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:20.476: INFO: Number of nodes with available pods: 0 Feb 21 10:57:20.476: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:22.115: INFO: Number of nodes with available pods: 0 Feb 21 10:57:22.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:22.464: INFO: Number of nodes with available pods: 0 Feb 21 10:57:22.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:25.944: INFO: Number of nodes with available pods: 0 Feb 21 10:57:25.945: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:26.469: INFO: Number of nodes with available pods: 0 Feb 21 10:57:26.469: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:27.464: INFO: Number of nodes with available pods: 0 Feb 21 10:57:27.464: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:28.468: INFO: Number of nodes with available pods: 1 Feb 21 10:57:28.468: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 21 10:57:28.665: INFO: Number of nodes with available pods: 1 Feb 21 10:57:28.665: INFO: Number of running nodes: 0, number of available pods: 1 Feb 21 10:57:29.671: INFO: Number of nodes with available pods: 0 Feb 21 10:57:29.672: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 21 10:57:29.737: INFO: Number of nodes with available pods: 0 Feb 21 10:57:29.737: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:30.747: INFO: Number of nodes with available pods: 0 Feb 21 10:57:30.747: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:31.748: INFO: Number of nodes with available pods: 0 Feb 21 10:57:31.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:33.052: INFO: Number of nodes with available pods: 0 Feb 21 10:57:33.053: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:33.760: INFO: Number of nodes with available pods: 0 Feb 21 10:57:33.761: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:34.983: INFO: Number of nodes with available pods: 0 Feb 21 10:57:34.984: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:35.750: INFO: Number of nodes with available pods: 0 Feb 21 10:57:35.750: INFO: Node hunter-server-hu5at5svl7ps is running more 
than one daemon pod Feb 21 10:57:36.912: INFO: Number of nodes with available pods: 0 Feb 21 10:57:36.912: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:37.757: INFO: Number of nodes with available pods: 0 Feb 21 10:57:37.757: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:38.753: INFO: Number of nodes with available pods: 0 Feb 21 10:57:38.753: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:39.757: INFO: Number of nodes with available pods: 0 Feb 21 10:57:39.757: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:40.748: INFO: Number of nodes with available pods: 0 Feb 21 10:57:40.748: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:41.749: INFO: Number of nodes with available pods: 0 Feb 21 10:57:41.749: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:42.788: INFO: Number of nodes with available pods: 0 Feb 21 10:57:42.788: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:44.140: INFO: Number of nodes with available pods: 0 Feb 21 10:57:44.140: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:44.765: INFO: Number of nodes with available pods: 0 Feb 21 10:57:44.766: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:45.776: INFO: Number of nodes with available pods: 0 Feb 21 10:57:45.777: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:46.746: INFO: Number of nodes with available pods: 0 Feb 21 10:57:46.746: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 21 10:57:47.868: INFO: Number of nodes with available pods: 1 Feb 21 10:57:47.868: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nqqjr, will wait for the garbage collector to delete the pods Feb 21 10:57:48.003: INFO: Deleting DaemonSet.extensions daemon-set took: 51.303898ms Feb 21 10:57:48.103: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.653495ms Feb 21 10:58:02.769: INFO: Number of nodes with available pods: 0 Feb 21 10:58:02.770: INFO: Number of running nodes: 0, number of available pods: 0 Feb 21 10:58:02.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nqqjr/daemonsets","resourceVersion":"22412624"},"items":null} Feb 21 10:58:02.815: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nqqjr/pods","resourceVersion":"22412624"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:58:02.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-nqqjr" for this suite. 
Feb 21 10:58:09.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:58:09.205: INFO: namespace: e2e-tests-daemonsets-nqqjr, resource: bindings, ignored listing per whitelist Feb 21 10:58:09.239: INFO: namespace e2e-tests-daemonsets-nqqjr deletion completed in 6.157571972s • [SLOW TEST:53.072 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:58:09.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 10:58:09.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-tzt6r" to be "success or failure" Feb 21 10:58:09.474: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.386369ms Feb 21 10:58:11.486: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019134707s Feb 21 10:58:13.569: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102187161s Feb 21 10:58:15.579: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112022593s Feb 21 10:58:17.593: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126452307s Feb 21 10:58:19.610: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
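The "complex daemon" test that finishes above drives scheduling purely through labels: the DaemonSet's pod template carries a nodeSelector, so flipping a node label from blue to green first unschedules and later reschedules the daemon pod. A hedged sketch of the same flow; the label key/values, names and image are placeholders, and <node-name> stands for a real node.

$ kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: selector-daemon-demo
spec:
  selector:
    matchLabels:
      name: selector-daemon-demo
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: selector-daemon-demo
    spec:
      nodeSelector:
        color: blue                # only nodes labelled color=blue run the pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl label node <node-name> color=blue               # daemon pod gets scheduled
$ kubectl label node <node-name> color=green --overwrite  # daemon pod is evicted again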
Elapsed: 10.143362854s STEP: Saw pod success Feb 21 10:58:19.610: INFO: Pod "downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 10:58:19.615: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 10:58:20.115: INFO: Waiting for pod downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008 to disappear Feb 21 10:58:20.120: INFO: Pod downwardapi-volume-0be37389-5499-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:58:20.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tzt6r" for this suite. Feb 21 10:58:26.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:58:26.352: INFO: namespace: e2e-tests-projected-tzt6r, resource: bindings, ignored listing per whitelist Feb 21 10:58:26.367: INFO: namespace e2e-tests-projected-tzt6r deletion completed in 6.241716648s • [SLOW TEST:17.128 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:58:26.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-tpxfk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpxfk to expose endpoints map[] Feb 21 10:58:26.719: INFO: Get endpoints failed (43.526052ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 21 10:58:27.744: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpxfk exposes endpoints map[] (1.068410932s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-tpxfk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpxfk to expose endpoints map[pod1:[100]] Feb 21 10:58:33.728: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.779010856s elapsed, will retry) Feb 21 10:58:41.283: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (13.33356705s elapsed, will retry) Feb 21 10:58:43.453: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpxfk exposes endpoints map[pod1:[100]] (15.50393643s elapsed) STEP: Creating pod pod2 in namespace 
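The second projected downward-API test that finishes above asserts that a per-item mode is honoured on the projected file. A minimal illustrative pod spec follows; the names and the 0400 mode are placeholders rather than the suite's exact values.

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400             # expect the projected file to end up with mode 400
            fieldRef:
              fieldPath: metadata.name
EOF
$ kubectl logs downwardapi-mode-demo   # should print 400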
e2e-tests-services-tpxfk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpxfk to expose endpoints map[pod1:[100] pod2:[101]] Feb 21 10:58:49.216: INFO: Unexpected endpoints: found map[16e4e0bd-5499-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.747376987s elapsed, will retry) Feb 21 10:59:00.023: INFO: Unexpected endpoints: found map[16e4e0bd-5499-11ea-a994-fa163e34d433:[100]], expected map[pod2:[101] pod1:[100]] (16.555135188s elapsed, will retry) Feb 21 10:59:04.640: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpxfk exposes endpoints map[pod1:[100] pod2:[101]] (21.171337591s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-tpxfk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpxfk to expose endpoints map[pod2:[101]] Feb 21 10:59:06.521: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpxfk exposes endpoints map[pod2:[101]] (1.864867694s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-tpxfk STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-tpxfk to expose endpoints map[] Feb 21 10:59:06.749: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-tpxfk exposes endpoints map[] (29.700866ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:59:06.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-tpxfk" for this suite. Feb 21 10:59:33.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 10:59:33.230: INFO: namespace: e2e-tests-services-tpxfk, resource: bindings, ignored listing per whitelist Feb 21 10:59:33.275: INFO: namespace e2e-tests-services-tpxfk deletion completed in 26.287721726s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:66.908 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 10:59:33.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as 
well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0221 10:59:46.988223 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 21 10:59:46.988: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 10:59:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rt9t7" for this suite. Feb 21 11:00:11.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:00:11.198: INFO: namespace: e2e-tests-gc-rt9t7, resource: bindings, ignored listing per whitelist Feb 21 11:00:13.741: INFO: namespace e2e-tests-gc-rt9t7 deletion completed in 26.740934832s • [SLOW TEST:40.465 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:00:13.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 21 11:00:15.780: INFO: namespace e2e-tests-kubectl-8hxvt Feb 21 11:00:15.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hxvt' Feb 21 11:00:22.507: 
INFO: stderr: "" Feb 21 11:00:22.508: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 21 11:00:23.637: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:23.637: INFO: Found 0 / 1 Feb 21 11:00:24.549: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:24.550: INFO: Found 0 / 1 Feb 21 11:00:26.470: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:26.471: INFO: Found 0 / 1 Feb 21 11:00:26.530: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:26.531: INFO: Found 0 / 1 Feb 21 11:00:27.844: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:27.845: INFO: Found 0 / 1 Feb 21 11:00:28.544: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:28.544: INFO: Found 0 / 1 Feb 21 11:00:29.575: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:29.576: INFO: Found 0 / 1 Feb 21 11:00:30.524: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:30.524: INFO: Found 0 / 1 Feb 21 11:00:32.179: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:32.179: INFO: Found 0 / 1 Feb 21 11:00:32.648: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:32.649: INFO: Found 0 / 1 Feb 21 11:00:33.597: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:33.597: INFO: Found 0 / 1 Feb 21 11:00:34.553: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:34.553: INFO: Found 0 / 1 Feb 21 11:00:35.523: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:35.523: INFO: Found 0 / 1 Feb 21 11:00:37.332: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:37.333: INFO: Found 1 / 1 Feb 21 11:00:37.333: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 21 11:00:37.477: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:00:37.478: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 21 11:00:37.478: INFO: wait on redis-master startup in e2e-tests-kubectl-8hxvt Feb 21 11:00:37.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-fgrkx redis-master --namespace=e2e-tests-kubectl-8hxvt' Feb 21 11:00:37.743: INFO: stderr: "" Feb 21 11:00:37.743: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Feb 11:00:34.627 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Feb 11:00:34.627 # Server started, Redis version 3.2.12\n1:M 21 Feb 11:00:34.628 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 21 Feb 11:00:34.628 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 21 11:00:37.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8hxvt' Feb 21 11:00:38.033: INFO: stderr: "" Feb 21 11:00:38.034: INFO: stdout: "service/rm2 exposed\n" Feb 21 11:00:38.214: INFO: Service rm2 in namespace e2e-tests-kubectl-8hxvt found. STEP: exposing service Feb 21 11:00:40.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8hxvt' Feb 21 11:00:40.433: INFO: stderr: "" Feb 21 11:00:40.433: INFO: stdout: "service/rm3 exposed\n" Feb 21 11:00:40.481: INFO: Service rm3 in namespace e2e-tests-kubectl-8hxvt found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:00:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8hxvt" for this suite. Feb 21 11:01:10.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:01:13.696: INFO: namespace: e2e-tests-kubectl-8hxvt, resource: bindings, ignored listing per whitelist Feb 21 11:01:13.708: INFO: namespace e2e-tests-kubectl-8hxvt deletion completed in 31.167116425s • [SLOW TEST:59.967 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:01:13.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 21 11:01:14.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-bzj4g run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 21 11:01:26.491: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0221 11:01:25.191483 893 log.go:172] (0xc0006d6160) (0xc000a04a00) Create stream\nI0221 11:01:25.191568 893 log.go:172] (0xc0006d6160) (0xc000a04a00) Stream added, broadcasting: 1\nI0221 11:01:25.198403 893 log.go:172] (0xc0006d6160) Reply frame received for 1\nI0221 11:01:25.198434 893 log.go:172] (0xc0006d6160) (0xc00001e000) Create stream\nI0221 11:01:25.198443 893 log.go:172] (0xc0006d6160) (0xc00001e000) Stream added, broadcasting: 3\nI0221 11:01:25.199492 893 log.go:172] (0xc0006d6160) Reply frame received for 3\nI0221 11:01:25.199522 893 log.go:172] (0xc0006d6160) (0xc000356000) Create stream\nI0221 11:01:25.199534 893 log.go:172] (0xc0006d6160) (0xc000356000) Stream added, broadcasting: 5\nI0221 11:01:25.200413 893 log.go:172] (0xc0006d6160) Reply frame received for 5\nI0221 11:01:25.200428 893 log.go:172] (0xc0006d6160) (0xc00001e0a0) Create stream\nI0221 11:01:25.200432 893 log.go:172] (0xc0006d6160) (0xc00001e0a0) Stream added, broadcasting: 7\nI0221 11:01:25.201169 893 log.go:172] (0xc0006d6160) Reply frame received for 7\nI0221 11:01:25.201385 893 log.go:172] (0xc00001e000) (3) Writing data frame\nI0221 11:01:25.201481 893 log.go:172] (0xc00001e000) (3) Writing data frame\nI0221 11:01:25.210694 893 log.go:172] (0xc0006d6160) Data frame received for 5\nI0221 11:01:25.210735 893 log.go:172] (0xc000356000) (5) Data frame handling\nI0221 11:01:25.210751 893 log.go:172] (0xc000356000) (5) Data frame sent\nI0221 11:01:25.212216 893 log.go:172] (0xc0006d6160) Data frame received for 5\nI0221 11:01:25.212228 893 log.go:172] (0xc000356000) (5) Data frame handling\nI0221 11:01:25.212240 893 log.go:172] (0xc000356000) (5) Data frame sent\nI0221 11:01:26.429456 893 log.go:172] (0xc0006d6160) Data frame received for 1\nI0221 11:01:26.429796 893 log.go:172] (0xc0006d6160) (0xc00001e000) Stream removed, broadcasting: 3\nI0221 11:01:26.429861 893 log.go:172] (0xc000a04a00) (1) Data frame handling\nI0221 11:01:26.429882 893 log.go:172] (0xc000a04a00) (1) Data frame sent\nI0221 11:01:26.429966 893 log.go:172] (0xc0006d6160) (0xc000a04a00) Stream removed, broadcasting: 1\nI0221 11:01:26.430071 893 log.go:172] (0xc0006d6160) (0xc00001e0a0) Stream removed, broadcasting: 7\nI0221 11:01:26.430111 893 log.go:172] (0xc0006d6160) (0xc000356000) Stream removed, broadcasting: 5\nI0221 11:01:26.430139 893 log.go:172] (0xc0006d6160) Go away received\nI0221 11:01:26.430341 893 log.go:172] (0xc0006d6160) (0xc000a04a00) Stream removed, broadcasting: 1\nI0221 11:01:26.430386 893 log.go:172] (0xc0006d6160) (0xc00001e000) Stream removed, broadcasting: 3\nI0221 11:01:26.430402 893 log.go:172] (0xc0006d6160) (0xc000356000) Stream removed, broadcasting: 5\nI0221 11:01:26.430419 893 log.go:172] (0xc0006d6160) (0xc00001e0a0) Stream removed, broadcasting: 7\n" Feb 21 11:01:26.491: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:01:28.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bzj4g" for this suite. 
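For reference, the run-with-attach invocation exercised by the Kubectl run --rm job test above can be reproduced by hand roughly as follows (namespace is illustrative; on a 1.13-era cluster the --generator=job/v1 form still works but prints the same deprecation warning seen in the stderr above):

    echo 'abcd1234' | kubectl --namespace=default run e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
      --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'

With --rm the job is deleted once the attached command exits, which is the behaviour the test then verifies.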
Feb 21 11:01:34.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:01:34.990: INFO: namespace: e2e-tests-kubectl-bzj4g, resource: bindings, ignored listing per whitelist Feb 21 11:01:35.272: INFO: namespace e2e-tests-kubectl-bzj4g deletion completed in 6.429964647s • [SLOW TEST:21.564 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:01:35.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 21 11:01:35.448: INFO: Waiting up to 5m0s for pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-qbh9h" to be "success or failure" Feb 21 11:01:35.456: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.392461ms Feb 21 11:01:37.471: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022690563s Feb 21 11:01:39.491: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042154699s Feb 21 11:01:42.255: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.806068238s Feb 21 11:01:44.291: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.842107386s Feb 21 11:01:48.994: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.545706158s Feb 21 11:01:51.005: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.556241026s STEP: Saw pod success Feb 21 11:01:51.005: INFO: Pod "pod-86ac3fc0-5499-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:01:51.009: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-86ac3fc0-5499-11ea-b1f8-0242ac110008 container test-container: STEP: delete the pod Feb 21 11:01:51.791: INFO: Waiting for pod pod-86ac3fc0-5499-11ea-b1f8-0242ac110008 to disappear Feb 21 11:01:52.557: INFO: Pod pod-86ac3fc0-5499-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:01:52.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-qbh9h" for this suite. Feb 21 11:01:58.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:01:58.709: INFO: namespace: e2e-tests-emptydir-qbh9h, resource: bindings, ignored listing per whitelist Feb 21 11:01:58.798: INFO: namespace e2e-tests-emptydir-qbh9h deletion completed in 6.219072765s • [SLOW TEST:23.526 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:01:58.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:01:59.702: INFO: Creating ReplicaSet my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008 Feb 21 11:01:59.733: INFO: Pod name my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008: Found 0 pods out of 1 Feb 21 11:02:05.250: INFO: Pod name my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008: Found 1 pods out of 1 Feb 21 11:02:05.250: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008" is running Feb 21 11:02:09.265: INFO: Pod "my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008-9httr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 11:01:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 11:01:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 11:01:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 
UTC LastTransitionTime:2020-02-21 11:01:59 +0000 UTC Reason: Message:}]) Feb 21 11:02:09.265: INFO: Trying to dial the pod Feb 21 11:02:14.335: INFO: Controller my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008: Got expected result from replica 1 [my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008-9httr]: "my-hostname-basic-9522a343-5499-11ea-b1f8-0242ac110008-9httr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:02:14.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-d2jdx" for this suite. Feb 21 11:02:24.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:02:25.016: INFO: namespace: e2e-tests-replicaset-d2jdx, resource: bindings, ignored listing per whitelist Feb 21 11:02:25.104: INFO: namespace e2e-tests-replicaset-d2jdx deletion completed in 10.747464325s • [SLOW TEST:26.306 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:02:25.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:02:39.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6p7sh" for this suite. 
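The Kubelet test above only asserts on the terminated state of a container whose command always fails; a quick way to inspect the same field by hand is a jsonpath query of this shape (pod name and namespace are illustrative):

    kubectl get pod bin-false-example -n kubelet-test-example \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'

Once the container has exited this typically prints a non-empty reason such as Error, which is what the test requires.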
Feb 21 11:02:45.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:02:46.074: INFO: namespace: e2e-tests-kubelet-test-6p7sh, resource: bindings, ignored listing per whitelist Feb 21 11:02:46.331: INFO: namespace e2e-tests-kubelet-test-6p7sh deletion completed in 7.051260603s • [SLOW TEST:21.227 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:02:46.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-jzg8j STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 21 11:02:46.559: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 21 11:03:40.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-jzg8j PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 21 11:03:40.903: INFO: >>> kubeConfig: /root/.kube/config I0221 11:03:41.033954 8 log.go:172] (0xc000a5c580) (0xc000c086e0) Create stream I0221 11:03:41.034117 8 log.go:172] (0xc000a5c580) (0xc000c086e0) Stream added, broadcasting: 1 I0221 11:03:41.040341 8 log.go:172] (0xc000a5c580) Reply frame received for 1 I0221 11:03:41.040441 8 log.go:172] (0xc000a5c580) (0xc001b132c0) Create stream I0221 11:03:41.040452 8 log.go:172] (0xc000a5c580) (0xc001b132c0) Stream added, broadcasting: 3 I0221 11:03:41.041768 8 log.go:172] (0xc000a5c580) Reply frame received for 3 I0221 11:03:41.041802 8 log.go:172] (0xc000a5c580) (0xc000c08780) Create stream I0221 11:03:41.041812 8 log.go:172] (0xc000a5c580) (0xc000c08780) Stream added, broadcasting: 5 I0221 11:03:41.043198 8 log.go:172] (0xc000a5c580) Reply frame received for 5 I0221 11:03:41.304549 8 log.go:172] (0xc000a5c580) Data frame received for 3 I0221 11:03:41.304755 8 log.go:172] (0xc001b132c0) (3) Data frame handling I0221 11:03:41.304809 8 log.go:172] (0xc001b132c0) (3) Data frame sent I0221 11:03:41.519092 8 log.go:172] (0xc000a5c580) Data frame received for 1 I0221 11:03:41.519170 8 log.go:172] (0xc000a5c580) (0xc001b132c0) Stream removed, broadcasting: 3 I0221 11:03:41.519196 8 log.go:172] (0xc000c086e0) (1) Data frame handling I0221 
11:03:41.519224 8 log.go:172] (0xc000c086e0) (1) Data frame sent I0221 11:03:41.519237 8 log.go:172] (0xc000a5c580) (0xc000c08780) Stream removed, broadcasting: 5 I0221 11:03:41.519275 8 log.go:172] (0xc000a5c580) (0xc000c086e0) Stream removed, broadcasting: 1 I0221 11:03:41.519285 8 log.go:172] (0xc000a5c580) Go away received I0221 11:03:41.519469 8 log.go:172] (0xc000a5c580) (0xc000c086e0) Stream removed, broadcasting: 1 I0221 11:03:41.519478 8 log.go:172] (0xc000a5c580) (0xc001b132c0) Stream removed, broadcasting: 3 I0221 11:03:41.519486 8 log.go:172] (0xc000a5c580) (0xc000c08780) Stream removed, broadcasting: 5 Feb 21 11:03:41.519: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:03:41.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-jzg8j" for this suite. Feb 21 11:04:05.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:04:05.640: INFO: namespace: e2e-tests-pod-network-test-jzg8j, resource: bindings, ignored listing per whitelist Feb 21 11:04:05.682: INFO: namespace e2e-tests-pod-network-test-jzg8j deletion completed in 24.136921055s • [SLOW TEST:79.350 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:04:05.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-e05a9979-5499-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 21 11:04:05.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-xpj9t" to be "success or failure" Feb 21 11:04:05.944: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.071595ms Feb 21 11:04:07.968: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059588374s Feb 21 11:04:09.990: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082157253s Feb 21 11:04:12.466: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.558268627s Feb 21 11:04:14.535: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626447762s Feb 21 11:04:16.778: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.870019303s Feb 21 11:04:18.862: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.953813872s STEP: Saw pod success Feb 21 11:04:18.862: INFO: Pod "pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:04:18.871: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 21 11:04:18.953: INFO: Waiting for pod pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008 to disappear Feb 21 11:04:19.242: INFO: Pod pod-configmaps-e05b433f-5499-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:04:19.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xpj9t" for this suite. Feb 21 11:04:25.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:04:25.328: INFO: namespace: e2e-tests-configmap-xpj9t, resource: bindings, ignored listing per whitelist Feb 21 11:04:25.690: INFO: namespace e2e-tests-configmap-xpj9t deletion completed in 6.438208522s • [SLOW TEST:20.008 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:04:25.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 21 11:04:26.197: INFO: Waiting up to 5m0s for pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008" in namespace "e2e-tests-containers-k2z75" to be "success or failure" Feb 21 11:04:26.228: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.728275ms Feb 21 11:04:28.259: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06208798s Feb 21 11:04:30.279: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.081983162s Feb 21 11:04:32.928: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.730426474s Feb 21 11:04:34.957: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.76012781s Feb 21 11:04:36.980: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.782893453s STEP: Saw pod success Feb 21 11:04:36.980: INFO: Pod "client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:04:36.989: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008 container test-container: STEP: delete the pod Feb 21 11:04:37.139: INFO: Waiting for pod client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008 to disappear Feb 21 11:04:37.158: INFO: Pod client-containers-ec6d1071-5499-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:04:37.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-k2z75" for this suite. Feb 21 11:04:43.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:04:43.403: INFO: namespace: e2e-tests-containers-k2z75, resource: bindings, ignored listing per whitelist Feb 21 11:04:43.445: INFO: namespace e2e-tests-containers-k2z75 deletion completed in 6.21706104s • [SLOW TEST:17.755 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:04:43.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-f6ebe111-5499-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume secrets Feb 21 11:04:43.837: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-jlxw7" to be "success or failure" Feb 21 11:04:43.924: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 86.644435ms Feb 21 11:04:46.461: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.62278508s Feb 21 11:04:48.488: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.650423127s Feb 21 11:04:50.515: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677441665s Feb 21 11:04:52.677: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838956404s Feb 21 11:04:55.191: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.353323924s STEP: Saw pod success Feb 21 11:04:55.191: INFO: Pod "pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:04:55.200: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 21 11:04:55.716: INFO: Waiting for pod pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008 to disappear Feb 21 11:04:55.725: INFO: Pod pod-projected-secrets-f6eea7c4-5499-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:04:55.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jlxw7" for this suite. Feb 21 11:05:01.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:05:01.923: INFO: namespace: e2e-tests-projected-jlxw7, resource: bindings, ignored listing per whitelist Feb 21 11:05:01.987: INFO: namespace e2e-tests-projected-jlxw7 deletion completed in 6.246661935s • [SLOW TEST:18.542 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:05:01.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:05:08.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-nd7qg" for this suite. Feb 21 11:05:14.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:05:14.664: INFO: namespace: e2e-tests-namespaces-nd7qg, resource: bindings, ignored listing per whitelist Feb 21 11:05:14.680: INFO: namespace e2e-tests-namespaces-nd7qg deletion completed in 6.119589052s STEP: Destroying namespace "e2e-tests-nsdeletetest-f4vbz" for this suite. Feb 21 11:05:14.682: INFO: Namespace e2e-tests-nsdeletetest-f4vbz was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-4gdkr" for this suite. Feb 21 11:05:20.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:05:20.828: INFO: namespace: e2e-tests-nsdeletetest-4gdkr, resource: bindings, ignored listing per whitelist Feb 21 11:05:20.849: INFO: namespace e2e-tests-nsdeletetest-4gdkr deletion completed in 6.166493966s • [SLOW TEST:18.861 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:05:20.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0221 11:06:02.350044 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 21 11:06:02.350: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:06:02.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-hrgcj" for this suite. Feb 21 11:06:30.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:06:30.919: INFO: namespace: e2e-tests-gc-hrgcj, resource: bindings, ignored listing per whitelist Feb 21 11:06:31.032: INFO: namespace e2e-tests-gc-hrgcj deletion completed in 28.651551667s • [SLOW TEST:70.183 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:06:31.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 21 11:06:31.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:31.666: INFO: stderr: "" Feb 21 11:06:31.666: INFO: stdout: "pod/pause created\n" Feb 21 11:06:31.666: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 21 11:06:31.666: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-6kkl2" to be "running and ready" Feb 21 11:06:31.775: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 108.861316ms Feb 21 11:06:34.908: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241228035s Feb 21 11:06:36.925: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.258786575s Feb 21 11:06:40.801: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.134304088s Feb 21 11:06:42.935: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 11.268439886s Feb 21 11:06:44.949: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.282216313s Feb 21 11:06:46.970: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.30383498s Feb 21 11:06:48.997: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 17.330265351s Feb 21 11:06:48.997: INFO: Pod "pause" satisfied condition "running and ready" Feb 21 11:06:48.997: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 21 11:06:48.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:49.206: INFO: stderr: "" Feb 21 11:06:49.206: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 21 11:06:49.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:49.355: INFO: stderr: "" Feb 21 11:06:49.355: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 18s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 21 11:06:49.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:49.483: INFO: stderr: "" Feb 21 11:06:49.483: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 21 11:06:49.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:49.655: INFO: stderr: "" Feb 21 11:06:49.655: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 18s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 21 11:06:49.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:49.856: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 21 11:06:49.856: INFO: stdout: "pod \"pause\" force deleted\n" Feb 21 11:06:49.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-6kkl2' Feb 21 11:06:50.113: INFO: stderr: "No resources found.\n" Feb 21 11:06:50.113: INFO: stdout: "" Feb 21 11:06:50.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-6kkl2 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 21 11:06:50.208: INFO: stderr: "" Feb 21 11:06:50.208: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:06:50.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6kkl2" for this suite. Feb 21 11:06:56.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:07:00.181: INFO: namespace: e2e-tests-kubectl-6kkl2, resource: bindings, ignored listing per whitelist Feb 21 11:07:00.201: INFO: namespace e2e-tests-kubectl-6kkl2 deletion completed in 9.986772616s • [SLOW TEST:29.169 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:07:00.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 21 11:07:13.942: INFO: Successfully updated pod "pod-update-4855910f-549a-11ea-b1f8-0242ac110008" STEP: verifying the updated pod is in kubernetes Feb 21 11:07:14.098: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:07:14.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-79xbw" for this suite. 
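The Pods should be updated test drives its update through the client library rather than the CLI; a manual sketch of the same kind of in-place metadata update can be done with kubectl patch (pod name and label value are illustrative):

    kubectl patch pod pod-update-example --type=merge \
      -p '{"metadata":{"labels":{"time":"patched"}}}'
    kubectl get pod pod-update-example --show-labels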
Feb 21 11:07:38.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:07:38.420: INFO: namespace: e2e-tests-pods-79xbw, resource: bindings, ignored listing per whitelist Feb 21 11:07:38.459: INFO: namespace e2e-tests-pods-79xbw deletion completed in 24.326971134s • [SLOW TEST:38.258 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:07:38.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 11:07:38.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-ftv58" to be "success or failure" Feb 21 11:07:38.981: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005769ms Feb 21 11:07:41.010: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044155839s Feb 21 11:07:43.033: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067078017s Feb 21 11:07:45.056: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089955918s Feb 21 11:07:47.067: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100723048s Feb 21 11:07:49.210: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.243568179s Feb 21 11:07:52.946: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.979776116s STEP: Saw pod success Feb 21 11:07:52.946: INFO: Pod "downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:07:52.959: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 11:07:53.318: INFO: Waiting for pod downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008 to disappear Feb 21 11:07:54.306: INFO: Pod downwardapi-volume-5f580994-549a-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:07:54.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ftv58" for this suite. Feb 21 11:08:00.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:08:00.751: INFO: namespace: e2e-tests-downward-api-ftv58, resource: bindings, ignored listing per whitelist Feb 21 11:08:00.809: INFO: namespace e2e-tests-downward-api-ftv58 deletion completed in 6.490154234s • [SLOW TEST:22.350 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:08:00.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:08:02.758: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 21 11:08:02.764: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-862fn/daemonsets","resourceVersion":"22414046"},"items":null} Feb 21 11:08:02.769: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-862fn/pods","resourceVersion":"22414046"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:08:02.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-862fn" for this suite. 
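The Downward API volume test above asserts that, when a container declares no memory limit, the projected limits.memory value falls back to the node's allocatable memory. A minimal sketch of the same projection, assuming a hypothetical pod name (downward-demo), the busybox image, and a 1Mi divisor:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/mem_limit']   # no memory limit set on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi
EOF
kubectl logs downward-demo   # once the pod completes; prints the node allocatable memory in MiB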
Feb 21 11:08:10.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:08:10.986: INFO: namespace: e2e-tests-daemonsets-862fn, resource: bindings, ignored listing per whitelist Feb 21 11:08:11.031: INFO: namespace e2e-tests-daemonsets-862fn deletion completed in 8.24798875s S [SKIPPING] [10.222 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:08:02.758: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:08:11.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 21 11:08:11.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sv7pb' Feb 21 11:08:11.685: INFO: stderr: "" Feb 21 11:08:11.685: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. Feb 21 11:08:12.737: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:12.737: INFO: Found 0 / 1 Feb 21 11:08:15.493: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:15.493: INFO: Found 0 / 1 Feb 21 11:08:16.202: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:16.202: INFO: Found 0 / 1 Feb 21 11:08:16.700: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:16.700: INFO: Found 0 / 1 Feb 21 11:08:17.716: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:17.717: INFO: Found 0 / 1 Feb 21 11:08:18.708: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:18.708: INFO: Found 0 / 1 Feb 21 11:08:20.358: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:20.358: INFO: Found 0 / 1 Feb 21 11:08:21.001: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:21.001: INFO: Found 0 / 1 Feb 21 11:08:21.876: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:21.877: INFO: Found 0 / 1 Feb 21 11:08:22.717: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:22.717: INFO: Found 0 / 1 Feb 21 11:08:23.699: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:23.699: INFO: Found 1 / 1 Feb 21 11:08:23.699: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Feb 21 11:08:23.705: INFO: Selector matched 1 pods for map[app:redis] Feb 21 11:08:23.705: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 21 11:08:23.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb' Feb 21 11:08:23.885: INFO: stderr: "" Feb 21 11:08:23.885: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Feb 11:08:22.963 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Feb 11:08:22.964 # Server started, Redis version 3.2.12\n1:M 21 Feb 11:08:22.964 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Feb 11:08:22.964 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 21 11:08:23.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb --tail=1' Feb 21 11:08:24.048: INFO: stderr: "" Feb 21 11:08:24.049: INFO: stdout: "1:M 21 Feb 11:08:22.964 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 21 11:08:24.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb --limit-bytes=1' Feb 21 11:08:24.188: INFO: stderr: "" Feb 21 11:08:24.189: INFO: stdout: " " STEP: exposing timestamps Feb 21 11:08:24.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb --tail=1 --timestamps' Feb 21 11:08:24.311: INFO: stderr: "" Feb 21 11:08:24.312: INFO: stdout: "2020-02-21T11:08:22.964966514Z 1:M 21 Feb 11:08:22.964 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 21 11:08:26.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb --since=1s' Feb 21 11:08:26.972: INFO: stderr: "" Feb 21 11:08:26.973: INFO: stdout: "" Feb 21 11:08:26.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-5vn88 redis-master --namespace=e2e-tests-kubectl-sv7pb --since=24h' Feb 21 11:08:27.088: INFO: stderr: "" Feb 21 11:08:27.088: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Feb 11:08:22.963 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Feb 11:08:22.964 # Server started, Redis version 3.2.12\n1:M 21 Feb 11:08:22.964 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Feb 11:08:22.964 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 21 11:08:27.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sv7pb' Feb 21 11:08:27.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 21 11:08:27.264: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 21 11:08:27.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-sv7pb' Feb 21 11:08:27.492: INFO: stderr: "No resources found.\n" Feb 21 11:08:27.492: INFO: stdout: "" Feb 21 11:08:27.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-sv7pb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 21 11:08:27.586: INFO: stderr: "" Feb 21 11:08:27.586: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:08:27.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sv7pb" for this suite. 
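The log-filtering steps above exercise the standard kubectl selectors for log output, and the same flags can be replayed against any pod. A short sketch, with the pod (redis-master-xxxxx), container (redis-master), and namespace (demo) as placeholders:

kubectl logs redis-master-xxxxx -c redis-master -n demo --tail=1          # last line only
kubectl logs redis-master-xxxxx -c redis-master -n demo --limit-bytes=1   # first byte only
kubectl logs redis-master-xxxxx -c redis-master -n demo --tail=1 --timestamps
kubectl logs redis-master-xxxxx -c redis-master -n demo --since=1s        # usually empty for an idle pod
kubectl logs redis-master-xxxxx -c redis-master -n demo --since=24h       # full recent history

Note that the suite still invokes the deprecated "kubectl log" spelling; "kubectl logs" is the current form.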
Feb 21 11:08:51.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:08:51.848: INFO: namespace: e2e-tests-kubectl-sv7pb, resource: bindings, ignored listing per whitelist Feb 21 11:08:51.925: INFO: namespace e2e-tests-kubectl-sv7pb deletion completed in 24.320974983s • [SLOW TEST:40.894 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:08:51.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 1 pods STEP: Gathering metrics W0221 11:08:54.935709 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 21 11:08:54.935: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:08:54.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-w6wx4" for this suite. 
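The garbage collector test above verifies that deleting a Deployment without orphaning also removes the ReplicaSet it owns. A minimal way to observe the same behaviour from the CLI, with the deployment name (gc-demo) and image as placeholders:

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs -l app=gc-demo        # one ReplicaSet, owned by the Deployment
kubectl delete deployment gc-demo    # default cascading (background) deletion
kubectl get rs -l app=gc-demo        # eventually "No resources found" once the GC catches up
# to orphan the ReplicaSet instead, recent kubectl accepts --cascade=orphan
# (kubectl of the 1.13 era used --cascade=false for the same thing)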
Feb 21 11:09:03.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:09:03.750: INFO: namespace: e2e-tests-gc-w6wx4, resource: bindings, ignored listing per whitelist Feb 21 11:09:03.951: INFO: namespace e2e-tests-gc-w6wx4 deletion completed in 9.011028074s • [SLOW TEST:12.026 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:09:03.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 11:09:06.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-bjnxs" to be "success or failure" Feb 21 11:09:06.209: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 121.262308ms Feb 21 11:09:08.403: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314724785s Feb 21 11:09:10.418: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330362733s Feb 21 11:09:13.383: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.295115263s Feb 21 11:09:15.398: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.310196463s Feb 21 11:09:17.409: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.321518652s Feb 21 11:09:20.467: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.379542003s STEP: Saw pod success Feb 21 11:09:20.468: INFO: Pod "downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:09:20.478: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 11:09:20.843: INFO: Waiting for pod downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008 to disappear Feb 21 11:09:20.947: INFO: Pod downwardapi-volume-933fdbe1-549a-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:09:20.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bjnxs" for this suite. Feb 21 11:09:28.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:09:29.070: INFO: namespace: e2e-tests-downward-api-bjnxs, resource: bindings, ignored listing per whitelist Feb 21 11:09:29.110: INFO: namespace e2e-tests-downward-api-bjnxs deletion completed in 8.153031145s • [SLOW TEST:25.159 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:09:29.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-a134f38f-549a-11ea-b1f8-0242ac110008 STEP: Creating secret with name secret-projected-all-test-volume-a134f35b-549a-11ea-b1f8-0242ac110008 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 21 11:09:29.500: INFO: Waiting up to 5m0s for pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-5crlv" to be "success or failure" Feb 21 11:09:29.647: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 146.217143ms Feb 21 11:09:31.786: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285707907s Feb 21 11:09:33.817: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316677302s Feb 21 11:09:35.828: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.327566899s Feb 21 11:09:37.837: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336229705s Feb 21 11:09:39.853: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352919141s Feb 21 11:09:41.867: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.366051629s STEP: Saw pod success Feb 21 11:09:41.867: INFO: Pod "projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:09:41.920: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008 container projected-all-volume-test: STEP: delete the pod Feb 21 11:09:42.141: INFO: Waiting for pod projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008 to disappear Feb 21 11:09:46.428: INFO: Pod projected-volume-a134f24e-549a-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:09:46.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5crlv" for this suite. Feb 21 11:09:54.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:09:54.844: INFO: namespace: e2e-tests-projected-5crlv, resource: bindings, ignored listing per whitelist Feb 21 11:09:54.879: INFO: namespace e2e-tests-projected-5crlv deletion completed in 8.429567582s • [SLOW TEST:25.768 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:09:54.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-92txp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-92txp;check="$$(dig +notcp +noall +answer +search 
dns-test-service.e2e-tests-dns-92txp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-92txp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.31.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.31.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.31.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.31.223_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-92txp;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-92txp;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-92txp.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-92txp.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-92txp.svc 
SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-92txp.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 223.31.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.31.223_udp@PTR;check="$$(dig +tcp +noall +answer +search 223.31.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.31.223_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 21 11:10:13.490: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.494: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.500: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-92txp from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.508: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-92txp from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.513: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.517: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.523: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.527: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.531: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.535: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc from pod 
e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.539: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.544: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.547: INFO: Unable to read 10.108.31.223_udp@PTR from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.551: INFO: Unable to read 10.108.31.223_tcp@PTR from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.558: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.565: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.575: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-92txp from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.585: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-92txp from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.591: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.596: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.605: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.611: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.615: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested 
resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.620: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.626: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.631: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.638: INFO: Unable to read 10.108.31.223_udp@PTR from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.645: INFO: Unable to read 10.108.31.223_tcp@PTR from pod e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008) Feb 21 11:10:13.645: INFO: Lookups using e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-92txp wheezy_tcp@dns-test-service.e2e-tests-dns-92txp wheezy_udp@dns-test-service.e2e-tests-dns-92txp.svc wheezy_tcp@dns-test-service.e2e-tests-dns-92txp.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.108.31.223_udp@PTR 10.108.31.223_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-92txp jessie_tcp@dns-test-service.e2e-tests-dns-92txp jessie_udp@dns-test-service.e2e-tests-dns-92txp.svc jessie_tcp@dns-test-service.e2e-tests-dns-92txp.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-92txp.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-92txp.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.108.31.223_udp@PTR 10.108.31.223_tcp@PTR] Feb 21 11:10:19.080: INFO: DNS probes using e2e-tests-dns-92txp/dns-test-b0a922c8-549a-11ea-b1f8-0242ac110008 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:10:19.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-92txp" for this suite. 
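The wheezy/jessie probe scripts above boil down to a handful of dig queries run from inside a pod: A records for the service at several degrees of qualification, SRV records for its named port, and a PTR lookup of its ClusterIP. Stripped of the retry loop, the core checks look like this (service name dns-test-service, namespace demo, and the ClusterIP from this run are placeholders):

dig +notcp +noall +answer +search dns-test-service A                        # UDP, resolved via the pod's search path
dig +tcp +noall +answer +search dns-test-service.demo.svc A                 # TCP, partially qualified
dig +noall +answer _http._tcp.dns-test-service.demo.svc.cluster.local SRV   # SRV record for the named port "http"
dig +noall +answer -x 10.108.31.223                                         # PTR for the service ClusterIP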
Feb 21 11:10:29.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:10:29.154: INFO: namespace: e2e-tests-dns-92txp, resource: bindings, ignored listing per whitelist Feb 21 11:10:29.207: INFO: namespace e2e-tests-dns-92txp deletion completed in 9.57293035s • [SLOW TEST:34.328 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:10:29.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:10:29.475: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:10:40.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-2snfw" for this suite. 
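The websocket test above drives the pods/exec subresource of the API server directly; kubectl exec reaches the same subresource over an upgraded streaming connection, so a rough manual equivalent, with a placeholder pod name (ws-demo), is:

kubectl run ws-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- cat /etc/resolv.conf   # command output streamed back from the container
# the underlying endpoint is /api/v1/namespaces/<ns>/pods/ws-demo/exec?command=...&stdout=true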
Feb 21 11:11:24.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:11:24.752: INFO: namespace: e2e-tests-pods-2snfw, resource: bindings, ignored listing per whitelist Feb 21 11:11:24.836: INFO: namespace e2e-tests-pods-2snfw deletion completed in 44.719789504s • [SLOW TEST:55.629 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:11:24.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e64a8568-549a-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 21 11:11:25.385: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-zgjxw" to be "success or failure" Feb 21 11:11:25.393: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.944498ms Feb 21 11:11:27.445: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059024333s Feb 21 11:11:29.500: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11399176s Feb 21 11:11:31.574: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188389364s Feb 21 11:11:36.582: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.196716018s Feb 21 11:11:38.598: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.212140518s Feb 21 11:11:40.620: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.234310928s STEP: Saw pod success Feb 21 11:11:40.620: INFO: Pod "pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:11:40.631: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 21 11:11:41.781: INFO: Waiting for pod pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008 to disappear Feb 21 11:11:41.792: INFO: Pod pod-projected-configmaps-e64c31a5-549a-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:11:41.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zgjxw" for this suite. Feb 21 11:11:53.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:11:54.048: INFO: namespace: e2e-tests-projected-zgjxw, resource: bindings, ignored listing per whitelist Feb 21 11:11:54.150: INFO: namespace e2e-tests-projected-zgjxw deletion completed in 12.351176484s • [SLOW TEST:29.314 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:11:54.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 21 11:11:54.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-zb687' Feb 21 11:12:00.540: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 21 11:12:00.540: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 21 11:12:09.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-zb687' Feb 21 11:12:10.139: INFO: stderr: "" Feb 21 11:12:10.139: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:12:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zb687" for this suite. Feb 21 11:12:16.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:12:16.410: INFO: namespace: e2e-tests-kubectl-zb687, resource: bindings, ignored listing per whitelist Feb 21 11:12:16.418: INFO: namespace e2e-tests-kubectl-zb687 deletion completed in 6.213951776s • [SLOW TEST:22.268 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:12:16.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-04f23d89-549b-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 21 11:12:16.808: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-sgx5n" to be "success or failure" Feb 21 11:12:16.849: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 40.666833ms Feb 21 11:12:19.028: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219745849s Feb 21 11:12:21.076: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.267703483s Feb 21 11:12:23.085: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276267823s Feb 21 11:12:25.115: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306667912s Feb 21 11:12:27.133: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324842587s STEP: Saw pod success Feb 21 11:12:27.134: INFO: Pod "pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:12:27.152: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 21 11:12:27.444: INFO: Waiting for pod pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008 to disappear Feb 21 11:12:27.460: INFO: Pod pod-projected-configmaps-04f3e4c9-549b-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:12:27.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sgx5n" for this suite. Feb 21 11:12:33.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:12:33.557: INFO: namespace: e2e-tests-projected-sgx5n, resource: bindings, ignored listing per whitelist Feb 21 11:12:33.647: INFO: namespace e2e-tests-projected-sgx5n deletion completed in 6.178555979s • [SLOW TEST:17.229 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:12:33.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" 
&& test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-sj9zf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-sj9zf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 21 11:12:56.506: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.534: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.577: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.610: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.657: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.696: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.724: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.756: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.784: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.798: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008: the server could not find the requested resource (get pods dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008) Feb 21 11:12:56.798: INFO: Lookups using e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-sj9zf.svc.cluster.local 
jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 21 11:13:02.842: INFO: DNS probes using e2e-tests-dns-sj9zf/dns-test-0f34c3ed-549b-11ea-b1f8-0242ac110008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:13:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-sj9zf" for this suite. Feb 21 11:13:12.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:13:12.689: INFO: namespace: e2e-tests-dns-sj9zf, resource: bindings, ignored listing per whitelist Feb 21 11:13:12.794: INFO: namespace e2e-tests-dns-sj9zf deletion completed in 9.022326964s • [SLOW TEST:39.147 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:13:12.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 21 11:13:15.427: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:13:43.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-r27pq" for this suite. 
Feb 21 11:13:58.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:13:58.651: INFO: namespace: e2e-tests-init-container-r27pq, resource: bindings, ignored listing per whitelist Feb 21 11:13:58.676: INFO: namespace e2e-tests-init-container-r27pq deletion completed in 13.624386474s • [SLOW TEST:45.881 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:13:58.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:14:27.897: INFO: Container started at 2020-02-21 11:14:09 +0000 UTC, pod became ready at 2020-02-21 11:14:25 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:14:27.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-r6gj9" for this suite. 
Feb 21 11:14:52.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:14:52.213: INFO: namespace: e2e-tests-container-probe-r6gj9, resource: bindings, ignored listing per whitelist Feb 21 11:14:52.257: INFO: namespace e2e-tests-container-probe-r6gj9 deletion completed in 24.348551332s • [SLOW TEST:53.581 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:14:52.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tftzz STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 21 11:14:52.529: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 21 11:15:47.088: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-tftzz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 21 11:15:47.088: INFO: >>> kubeConfig: /root/.kube/config I0221 11:15:47.173195 8 log.go:172] (0xc001b9e210) (0xc000de8460) Create stream I0221 11:15:47.173343 8 log.go:172] (0xc001b9e210) (0xc000de8460) Stream added, broadcasting: 1 I0221 11:15:47.181288 8 log.go:172] (0xc001b9e210) Reply frame received for 1 I0221 11:15:47.181335 8 log.go:172] (0xc001b9e210) (0xc000de8500) Create stream I0221 11:15:47.181347 8 log.go:172] (0xc001b9e210) (0xc000de8500) Stream added, broadcasting: 3 I0221 11:15:47.182345 8 log.go:172] (0xc001b9e210) Reply frame received for 3 I0221 11:15:47.182364 8 log.go:172] (0xc001b9e210) (0xc00050c0a0) Create stream I0221 11:15:47.182371 8 log.go:172] (0xc001b9e210) (0xc00050c0a0) Stream added, broadcasting: 5 I0221 11:15:47.183317 8 log.go:172] (0xc001b9e210) Reply frame received for 5 I0221 11:15:48.429966 8 log.go:172] (0xc001b9e210) Data frame received for 3 I0221 11:15:48.430044 8 log.go:172] (0xc000de8500) (3) Data frame handling I0221 11:15:48.430064 8 log.go:172] (0xc000de8500) (3) Data frame sent I0221 11:15:48.692753 8 log.go:172] (0xc001b9e210) Data frame received for 1 I0221 11:15:48.692904 8 log.go:172] (0xc000de8460) (1) Data frame handling I0221 11:15:48.692938 8 log.go:172] (0xc000de8460) (1) Data frame sent I0221 11:15:48.693330 8 log.go:172] (0xc001b9e210) (0xc000de8460) Stream removed, broadcasting: 1 I0221 11:15:48.694006 8 log.go:172] (0xc001b9e210) 
(0xc000de8500) Stream removed, broadcasting: 3 I0221 11:15:48.694676 8 log.go:172] (0xc001b9e210) (0xc00050c0a0) Stream removed, broadcasting: 5 I0221 11:15:48.694751 8 log.go:172] (0xc001b9e210) (0xc000de8460) Stream removed, broadcasting: 1 I0221 11:15:48.694771 8 log.go:172] (0xc001b9e210) (0xc000de8500) Stream removed, broadcasting: 3 I0221 11:15:48.694787 8 log.go:172] (0xc001b9e210) (0xc00050c0a0) Stream removed, broadcasting: 5 Feb 21 11:15:48.695: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:15:48.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0221 11:15:48.696180 8 log.go:172] (0xc001b9e210) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-tftzz" for this suite. Feb 21 11:16:18.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:16:18.856: INFO: namespace: e2e-tests-pod-network-test-tftzz, resource: bindings, ignored listing per whitelist Feb 21 11:16:18.918: INFO: namespace e2e-tests-pod-network-test-tftzz deletion completed in 30.19642493s • [SLOW TEST:86.661 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:16:18.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 11:16:19.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-cqqp2" to be "success or failure" Feb 21 11:16:19.387: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.003675ms Feb 21 11:16:21.406: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033032952s Feb 21 11:16:23.448: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074338049s Feb 21 11:16:26.001: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.627472513s Feb 21 11:16:28.053: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679617231s Feb 21 11:16:30.249: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.875741489s Feb 21 11:16:33.791: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.418157852s Feb 21 11:16:38.035: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.662104221s Feb 21 11:16:41.303: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.929724309s Feb 21 11:16:43.327: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.953591598s Feb 21 11:16:45.363: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.989678014s STEP: Saw pod success Feb 21 11:16:45.363: INFO: Pod "downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:16:45.416: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 11:16:45.898: INFO: Waiting for pod downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008 to disappear Feb 21 11:16:45.952: INFO: Pod downwardapi-volume-95779832-549b-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:16:45.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cqqp2" for this suite. 
Feb 21 11:17:11.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:17:11.378: INFO: namespace: e2e-tests-downward-api-cqqp2, resource: bindings, ignored listing per whitelist Feb 21 11:17:11.378: INFO: namespace e2e-tests-downward-api-cqqp2 deletion completed in 25.311047683s • [SLOW TEST:52.459 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:17:11.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-b4aae7a1-549b-11ea-b1f8-0242ac110008 STEP: Creating a pod to test consume secrets Feb 21 11:17:11.686: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-96g56" to be "success or failure" Feb 21 11:17:11.693: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230992ms Feb 21 11:17:13.724: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037866382s Feb 21 11:17:17.373: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.686573752s Feb 21 11:17:19.389: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.70275998s Feb 21 11:17:21.398: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.711325124s Feb 21 11:17:23.407: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.720776167s Feb 21 11:17:25.429: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.742537412s Feb 21 11:17:27.453: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.766588747s Feb 21 11:17:29.479: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.792847174s Feb 21 11:17:31.499: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.812375073s Feb 21 11:17:35.285: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.598952115s Feb 21 11:17:37.304: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.617187521s Feb 21 11:17:39.329: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.642587345s Feb 21 11:17:41.349: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.662175782s Feb 21 11:17:43.376: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 31.690031442s Feb 21 11:17:45.388: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.701467574s Feb 21 11:17:47.400: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.713603017s STEP: Saw pod success Feb 21 11:17:47.400: INFO: Pod "pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:17:47.406: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 21 11:17:47.599: INFO: Waiting for pod pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008 to disappear Feb 21 11:17:48.009: INFO: Pod pod-projected-secrets-b4abbd70-549b-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:17:48.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-96g56" for this suite. 
Feb 21 11:17:54.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:17:54.406: INFO: namespace: e2e-tests-projected-96g56, resource: bindings, ignored listing per whitelist Feb 21 11:17:54.745: INFO: namespace e2e-tests-projected-96g56 deletion completed in 6.708986423s • [SLOW TEST:43.367 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:17:54.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Feb 21 11:17:54.990: INFO: Waiting up to 5m0s for pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008" in namespace "e2e-tests-containers-nv7rw" to be "success or failure" Feb 21 11:17:55.013: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.471763ms Feb 21 11:17:57.031: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03999956s Feb 21 11:17:59.044: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052958353s Feb 21 11:18:01.279: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288260539s Feb 21 11:18:03.299: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308837309s Feb 21 11:18:05.315: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.324581688s STEP: Saw pod success Feb 21 11:18:05.315: INFO: Pod "client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:18:05.331: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008 container test-container: STEP: delete the pod Feb 21 11:18:07.358: INFO: Waiting for pod client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008 to disappear Feb 21 11:18:07.385: INFO: Pod client-containers-ce83e8a6-549b-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:18:07.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-nv7rw" for this suite. Feb 21 11:18:15.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:18:15.670: INFO: namespace: e2e-tests-containers-nv7rw, resource: bindings, ignored listing per whitelist Feb 21 11:18:15.739: INFO: namespace e2e-tests-containers-nv7rw deletion completed in 8.176267949s • [SLOW TEST:20.993 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:18:15.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:18:16.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-7btpx" for this suite. 
Feb 21 11:18:40.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:18:40.306: INFO: namespace: e2e-tests-pods-7btpx, resource: bindings, ignored listing per whitelist Feb 21 11:18:40.653: INFO: namespace e2e-tests-pods-7btpx deletion completed in 24.59675579s • [SLOW TEST:24.914 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:18:40.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 21 11:18:40.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-26kvl" to be "success or failure" Feb 21 11:18:40.953: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.178609ms Feb 21 11:18:42.969: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046074716s Feb 21 11:18:51.720: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.796985977s Feb 21 11:18:53.760: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.836340531s Feb 21 11:18:55.796: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.873195302s Feb 21 11:18:57.809: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.885885358s Feb 21 11:18:59.912: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.989107889s Feb 21 11:19:01.937: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 21.013805712s STEP: Saw pod success Feb 21 11:19:01.937: INFO: Pod "downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008" satisfied condition "success or failure" Feb 21 11:19:01.949: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008 container client-container: STEP: delete the pod Feb 21 11:19:04.660: INFO: Waiting for pod downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008 to disappear Feb 21 11:19:04.743: INFO: Pod downwardapi-volume-e9e6aee9-549b-11ea-b1f8-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:19:04.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-26kvl" for this suite. Feb 21 11:19:12.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:19:16.038: INFO: namespace: e2e-tests-projected-26kvl, resource: bindings, ignored listing per whitelist Feb 21 11:19:16.224: INFO: namespace e2e-tests-projected-26kvl deletion completed in 11.468488994s • [SLOW TEST:35.571 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:19:16.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 21 11:19:55.007: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:19:55.022: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:19:57.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:19:57.037: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:19:59.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:19:59.032: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:01.023: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:01.066: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:03.023: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:04.533: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:05.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:05.034: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:07.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:07.032: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:09.023: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:09.052: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:11.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:11.037: INFO: Pod pod-with-poststart-http-hook still exists Feb 21 11:20:13.022: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 21 11:20:13.033: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 21 11:20:13.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-77cd4" for this suite. 
Feb 21 11:20:33.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 21 11:20:33.113: INFO: namespace: e2e-tests-container-lifecycle-hook-77cd4, resource: bindings, ignored listing per whitelist Feb 21 11:20:33.179: INFO: namespace e2e-tests-container-lifecycle-hook-77cd4 deletion completed in 20.142826435s • [SLOW TEST:76.955 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 21 11:20:33.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 21 11:20:33.420: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log alternatives.l... (200; 39.788771ms)
Feb 21 11:20:33.462: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 40.94713ms)
Feb 21 11:20:33.479: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 16.63536ms)
Feb 21 11:20:33.486: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 7.153999ms)
Feb 21 11:20:33.493: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 6.616746ms)
Feb 21 11:20:33.499: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.978273ms)
Feb 21 11:20:33.504: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.786456ms)
Feb 21 11:20:33.508: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.744659ms)
Feb 21 11:20:33.513: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.882198ms)
Feb 21 11:20:33.517: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.054401ms)
Feb 21 11:20:33.522: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.109845ms)
Feb 21 11:20:33.527: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.822638ms)
Feb 21 11:20:33.532: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.195309ms)
Feb 21 11:20:33.537: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.003469ms)
Feb 21 11:20:33.542: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.003407ms)
Feb 21 11:20:33.547: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.307073ms)
Feb 21 11:20:33.558: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 10.934219ms)
Feb 21 11:20:33.563: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 5.154229ms)
Feb 21 11:20:33.567: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.458995ms)
Feb 21 11:20:33.572: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: alternatives.log alternatives.l... (200; 4.537631ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:20:33.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-584q5" for this suite.
Feb 21 11:20:39.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:20:39.713: INFO: namespace: e2e-tests-proxy-584q5, resource: bindings, ignored listing per whitelist
Feb 21 11:20:39.797: INFO: namespace e2e-tests-proxy-584q5 deletion completed in 6.22182088s

• [SLOW TEST:6.617 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
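Each probe above is a plain GET against the node's logs proxy subresource, and all twenty returned HTTP 200 within a few milliseconds. A minimal sketch of reproducing the same request by hand, assuming the kubeconfig and node name from this run are still reachable (these commands are illustrative, not part of the suite's output):

# Directory listing of the kubelet's /logs endpoint, routed through the apiserver proxy
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"

# A specific file from that listing, e.g. the alternatives.log entry seen in the responses above
kubectl --kubeconfig=/root/.kube/config get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/alternatives.log"

This is the same path the test hits twenty times while timing each response.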
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:20:39.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 21 11:20:40.097: INFO: Waiting up to 5m0s for pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-8bj55" to be "success or failure"
Feb 21 11:20:40.127: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.489407ms
Feb 21 11:20:42.227: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129770334s
Feb 21 11:20:44.244: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146700276s
Feb 21 11:20:46.936: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.838073046s
Feb 21 11:20:49.017: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.919507006s
Feb 21 11:20:51.110: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.012937166s
Feb 21 11:20:53.357: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.260029965s
STEP: Saw pod success
Feb 21 11:20:53.358: INFO: Pod "downward-api-30ef094e-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:20:53.370: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-30ef094e-549c-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 11:20:53.449: INFO: Waiting for pod downward-api-30ef094e-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:20:53.554: INFO: Pod downward-api-30ef094e-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:20:53.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8bj55" for this suite.
Feb 21 11:21:01.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:21:01.757: INFO: namespace: e2e-tests-downward-api-8bj55, resource: bindings, ignored listing per whitelist
Feb 21 11:21:01.772: INFO: namespace e2e-tests-downward-api-8bj55 deletion completed in 8.200560739s

• [SLOW TEST:21.975 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
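The test above creates a pod with no resources.limits and reads limits.cpu and limits.memory through the downward API, checking that both fall back to the node's allocatable values. A minimal sketch of the same wiring, using a hypothetical pod name and a busybox image rather than the generated spec the suite builds:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    # No resources.limits declared, so the downward API reports node allocatable instead.
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.memory
EOF

# After the pod succeeds, its log should show the node-allocatable CPU and memory values.
kubectl --kubeconfig=/root/.kube/config logs dapi-defaults-demo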
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:21:01.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 11:21:01.911: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:21:03.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-4b4c9" for this suite.
Feb 21 11:21:09.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:21:09.371: INFO: namespace: e2e-tests-custom-resource-definition-4b4c9, resource: bindings, ignored listing per whitelist
Feb 21 11:21:09.465: INFO: namespace e2e-tests-custom-resource-definition-4b4c9 deletion completed in 6.279297996s

• [SLOW TEST:7.693 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
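The CustomResourceDefinition test simply registers a throwaway CRD through the API and deletes it again. Against this v1.13.8 apiserver the same round trip can be done by hand with the apiextensions.k8s.io/v1beta1 schema; the group and kind below are the standard documentation example, not the randomized name the test generates:

kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
EOF

# Confirm the definition registered, then remove it, mirroring the create/delete the test performs.
kubectl --kubeconfig=/root/.kube/config get crd crontabs.stable.example.com
kubectl --kubeconfig=/root/.kube/config delete crd crontabs.stable.example.com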
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:21:09.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-w8kts
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-w8kts
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-w8kts
Feb 21 11:21:10.459: INFO: Found 0 stateful pods, waiting for 1
Feb 21 11:21:20.496: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:21:30.532: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 21 11:21:30.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 11:21:34.801: INFO: stderr: "I0221 11:21:30.772311    1349 log.go:172] (0xc00075c160) (0xc0006c6780) Create stream\nI0221 11:21:30.772557    1349 log.go:172] (0xc00075c160) (0xc0006c6780) Stream added, broadcasting: 1\nI0221 11:21:30.794467    1349 log.go:172] (0xc00075c160) Reply frame received for 1\nI0221 11:21:30.794534    1349 log.go:172] (0xc00075c160) (0xc000778b40) Create stream\nI0221 11:21:30.794543    1349 log.go:172] (0xc00075c160) (0xc000778b40) Stream added, broadcasting: 3\nI0221 11:21:30.797944    1349 log.go:172] (0xc00075c160) Reply frame received for 3\nI0221 11:21:30.797982    1349 log.go:172] (0xc00075c160) (0xc000676000) Create stream\nI0221 11:21:30.797994    1349 log.go:172] (0xc00075c160) (0xc000676000) Stream added, broadcasting: 5\nI0221 11:21:30.799159    1349 log.go:172] (0xc00075c160) Reply frame received for 5\nI0221 11:21:34.525123    1349 log.go:172] (0xc00075c160) Data frame received for 3\nI0221 11:21:34.525189    1349 log.go:172] (0xc000778b40) (3) Data frame handling\nI0221 11:21:34.525276    1349 log.go:172] (0xc000778b40) (3) Data frame sent\nI0221 11:21:34.792049    1349 log.go:172] (0xc00075c160) Data frame received for 1\nI0221 11:21:34.792132    1349 log.go:172] (0xc00075c160) (0xc000778b40) Stream removed, broadcasting: 3\nI0221 11:21:34.792158    1349 log.go:172] (0xc0006c6780) (1) Data frame handling\nI0221 11:21:34.792163    1349 log.go:172] (0xc0006c6780) (1) Data frame sent\nI0221 11:21:34.792170    1349 log.go:172] (0xc00075c160) (0xc0006c6780) Stream removed, broadcasting: 1\nI0221 11:21:34.792284    1349 log.go:172] (0xc00075c160) (0xc000676000) Stream removed, broadcasting: 5\nI0221 11:21:34.792411    1349 log.go:172] (0xc00075c160) (0xc0006c6780) Stream removed, broadcasting: 1\nI0221 11:21:34.792427    1349 log.go:172] (0xc00075c160) (0xc000778b40) Stream removed, broadcasting: 3\nI0221 11:21:34.792437    1349 log.go:172] (0xc00075c160) (0xc000676000) Stream removed, broadcasting: 5\nI0221 11:21:34.793126    1349 log.go:172] (0xc00075c160) Go away received\n"
Feb 21 11:21:34.801: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 11:21:34.801: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 11:21:34.832: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 11:21:34.833: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 11:21:35.025: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999997109s
Feb 21 11:21:36.033: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.846073146s
Feb 21 11:21:37.043: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.837872633s
Feb 21 11:21:38.055: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.827541843s
Feb 21 11:21:39.083: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.81586349s
Feb 21 11:21:40.097: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.787127584s
Feb 21 11:21:41.106: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.77417114s
Feb 21 11:21:42.118: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.764575026s
Feb 21 11:21:43.129: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.752247646s
Feb 21 11:21:44.138: INFO: Verifying statefulset ss doesn't scale past 1 for another 741.56013ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-w8kts
Feb 21 11:21:45.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 11:21:45.648: INFO: stderr: "I0221 11:21:45.316043    1370 log.go:172] (0xc00055a0b0) (0xc0006c41e0) Create stream\nI0221 11:21:45.316132    1370 log.go:172] (0xc00055a0b0) (0xc0006c41e0) Stream added, broadcasting: 1\nI0221 11:21:45.320799    1370 log.go:172] (0xc00055a0b0) Reply frame received for 1\nI0221 11:21:45.320834    1370 log.go:172] (0xc00055a0b0) (0xc0006c4820) Create stream\nI0221 11:21:45.320848    1370 log.go:172] (0xc00055a0b0) (0xc0006c4820) Stream added, broadcasting: 3\nI0221 11:21:45.321659    1370 log.go:172] (0xc00055a0b0) Reply frame received for 3\nI0221 11:21:45.321685    1370 log.go:172] (0xc00055a0b0) (0xc000720b40) Create stream\nI0221 11:21:45.321692    1370 log.go:172] (0xc00055a0b0) (0xc000720b40) Stream added, broadcasting: 5\nI0221 11:21:45.322676    1370 log.go:172] (0xc00055a0b0) Reply frame received for 5\nI0221 11:21:45.537416    1370 log.go:172] (0xc00055a0b0) Data frame received for 3\nI0221 11:21:45.537670    1370 log.go:172] (0xc0006c4820) (3) Data frame handling\nI0221 11:21:45.537705    1370 log.go:172] (0xc0006c4820) (3) Data frame sent\nI0221 11:21:45.636384    1370 log.go:172] (0xc00055a0b0) Data frame received for 1\nI0221 11:21:45.636726    1370 log.go:172] (0xc0006c41e0) (1) Data frame handling\nI0221 11:21:45.636778    1370 log.go:172] (0xc0006c41e0) (1) Data frame sent\nI0221 11:21:45.636911    1370 log.go:172] (0xc00055a0b0) (0xc0006c41e0) Stream removed, broadcasting: 1\nI0221 11:21:45.637398    1370 log.go:172] (0xc00055a0b0) (0xc0006c4820) Stream removed, broadcasting: 3\nI0221 11:21:45.637870    1370 log.go:172] (0xc00055a0b0) (0xc000720b40) Stream removed, broadcasting: 5\nI0221 11:21:45.637972    1370 log.go:172] (0xc00055a0b0) (0xc0006c41e0) Stream removed, broadcasting: 1\nI0221 11:21:45.638012    1370 log.go:172] (0xc00055a0b0) (0xc0006c4820) Stream removed, broadcasting: 3\nI0221 11:21:45.638041    1370 log.go:172] (0xc00055a0b0) (0xc000720b40) Stream removed, broadcasting: 5\n"
Feb 21 11:21:45.648: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 11:21:45.648: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 11:21:45.668: INFO: Found 1 stateful pods, waiting for 3
Feb 21 11:21:57.740: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:21:57.740: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:21:57.740: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:22:05.680: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:22:05.680: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:22:05.680: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:22:15.679: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:22:15.679: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:22:15.679: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 21 11:22:15.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 11:22:16.167: INFO: stderr: "I0221 11:22:15.910391    1391 log.go:172] (0xc00077e2c0) (0xc0005bc780) Create stream\nI0221 11:22:15.910585    1391 log.go:172] (0xc00077e2c0) (0xc0005bc780) Stream added, broadcasting: 1\nI0221 11:22:15.918465    1391 log.go:172] (0xc00077e2c0) Reply frame received for 1\nI0221 11:22:15.918520    1391 log.go:172] (0xc00077e2c0) (0xc0005bc820) Create stream\nI0221 11:22:15.918534    1391 log.go:172] (0xc00077e2c0) (0xc0005bc820) Stream added, broadcasting: 3\nI0221 11:22:15.921019    1391 log.go:172] (0xc00077e2c0) Reply frame received for 3\nI0221 11:22:15.921187    1391 log.go:172] (0xc00077e2c0) (0xc000802960) Create stream\nI0221 11:22:15.921208    1391 log.go:172] (0xc00077e2c0) (0xc000802960) Stream added, broadcasting: 5\nI0221 11:22:15.923234    1391 log.go:172] (0xc00077e2c0) Reply frame received for 5\nI0221 11:22:16.037863    1391 log.go:172] (0xc00077e2c0) Data frame received for 3\nI0221 11:22:16.037965    1391 log.go:172] (0xc0005bc820) (3) Data frame handling\nI0221 11:22:16.038028    1391 log.go:172] (0xc0005bc820) (3) Data frame sent\nI0221 11:22:16.156681    1391 log.go:172] (0xc00077e2c0) Data frame received for 1\nI0221 11:22:16.156759    1391 log.go:172] (0xc0005bc780) (1) Data frame handling\nI0221 11:22:16.156778    1391 log.go:172] (0xc0005bc780) (1) Data frame sent\nI0221 11:22:16.156803    1391 log.go:172] (0xc00077e2c0) (0xc0005bc780) Stream removed, broadcasting: 1\nI0221 11:22:16.156967    1391 log.go:172] (0xc00077e2c0) (0xc0005bc820) Stream removed, broadcasting: 3\nI0221 11:22:16.157140    1391 log.go:172] (0xc00077e2c0) (0xc000802960) Stream removed, broadcasting: 5\nI0221 11:22:16.157174    1391 log.go:172] (0xc00077e2c0) Go away received\nI0221 11:22:16.157313    1391 log.go:172] (0xc00077e2c0) (0xc0005bc780) Stream removed, broadcasting: 1\nI0221 11:22:16.157387    1391 log.go:172] (0xc00077e2c0) (0xc0005bc820) Stream removed, broadcasting: 3\nI0221 11:22:16.157420    1391 log.go:172] (0xc00077e2c0) (0xc000802960) Stream removed, broadcasting: 5\n"
Feb 21 11:22:16.168: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 11:22:16.168: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 11:22:16.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 11:22:16.766: INFO: stderr: "I0221 11:22:16.300956    1413 log.go:172] (0xc000748160) (0xc0006a8780) Create stream\nI0221 11:22:16.301041    1413 log.go:172] (0xc000748160) (0xc0006a8780) Stream added, broadcasting: 1\nI0221 11:22:16.328112    1413 log.go:172] (0xc000748160) Reply frame received for 1\nI0221 11:22:16.328165    1413 log.go:172] (0xc000748160) (0xc000642aa0) Create stream\nI0221 11:22:16.328171    1413 log.go:172] (0xc000748160) (0xc000642aa0) Stream added, broadcasting: 3\nI0221 11:22:16.330526    1413 log.go:172] (0xc000748160) Reply frame received for 3\nI0221 11:22:16.330593    1413 log.go:172] (0xc000748160) (0xc000642be0) Create stream\nI0221 11:22:16.330602    1413 log.go:172] (0xc000748160) (0xc000642be0) Stream added, broadcasting: 5\nI0221 11:22:16.333820    1413 log.go:172] (0xc000748160) Reply frame received for 5\nI0221 11:22:16.638859    1413 log.go:172] (0xc000748160) Data frame received for 3\nI0221 11:22:16.639010    1413 log.go:172] (0xc000642aa0) (3) Data frame handling\nI0221 11:22:16.639046    1413 log.go:172] (0xc000642aa0) (3) Data frame sent\nI0221 11:22:16.760720    1413 log.go:172] (0xc000748160) (0xc000642aa0) Stream removed, broadcasting: 3\nI0221 11:22:16.760867    1413 log.go:172] (0xc000748160) Data frame received for 1\nI0221 11:22:16.760907    1413 log.go:172] (0xc0006a8780) (1) Data frame handling\nI0221 11:22:16.760913    1413 log.go:172] (0xc0006a8780) (1) Data frame sent\nI0221 11:22:16.760918    1413 log.go:172] (0xc000748160) (0xc0006a8780) Stream removed, broadcasting: 1\nI0221 11:22:16.760973    1413 log.go:172] (0xc000748160) (0xc000642be0) Stream removed, broadcasting: 5\nI0221 11:22:16.760992    1413 log.go:172] (0xc000748160) Go away received\nI0221 11:22:16.761132    1413 log.go:172] (0xc000748160) (0xc0006a8780) Stream removed, broadcasting: 1\nI0221 11:22:16.761142    1413 log.go:172] (0xc000748160) (0xc000642aa0) Stream removed, broadcasting: 3\nI0221 11:22:16.761147    1413 log.go:172] (0xc000748160) (0xc000642be0) Stream removed, broadcasting: 5\n"
Feb 21 11:22:16.766: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 11:22:16.766: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 11:22:16.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 11:22:17.387: INFO: stderr: "I0221 11:22:17.065610    1433 log.go:172] (0xc00014a9a0) (0xc0005a66e0) Create stream\nI0221 11:22:17.065762    1433 log.go:172] (0xc00014a9a0) (0xc0005a66e0) Stream added, broadcasting: 1\nI0221 11:22:17.071038    1433 log.go:172] (0xc00014a9a0) Reply frame received for 1\nI0221 11:22:17.071077    1433 log.go:172] (0xc00014a9a0) (0xc000820000) Create stream\nI0221 11:22:17.071091    1433 log.go:172] (0xc00014a9a0) (0xc000820000) Stream added, broadcasting: 3\nI0221 11:22:17.072181    1433 log.go:172] (0xc00014a9a0) Reply frame received for 3\nI0221 11:22:17.072227    1433 log.go:172] (0xc00014a9a0) (0xc0008c4000) Create stream\nI0221 11:22:17.072270    1433 log.go:172] (0xc00014a9a0) (0xc0008c4000) Stream added, broadcasting: 5\nI0221 11:22:17.073404    1433 log.go:172] (0xc00014a9a0) Reply frame received for 5\nI0221 11:22:17.264216    1433 log.go:172] (0xc00014a9a0) Data frame received for 3\nI0221 11:22:17.264270    1433 log.go:172] (0xc000820000) (3) Data frame handling\nI0221 11:22:17.264291    1433 log.go:172] (0xc000820000) (3) Data frame sent\nI0221 11:22:17.378918    1433 log.go:172] (0xc00014a9a0) Data frame received for 1\nI0221 11:22:17.379088    1433 log.go:172] (0xc00014a9a0) (0xc0008c4000) Stream removed, broadcasting: 5\nI0221 11:22:17.379175    1433 log.go:172] (0xc00014a9a0) (0xc000820000) Stream removed, broadcasting: 3\nI0221 11:22:17.379254    1433 log.go:172] (0xc0005a66e0) (1) Data frame handling\nI0221 11:22:17.379277    1433 log.go:172] (0xc0005a66e0) (1) Data frame sent\nI0221 11:22:17.379286    1433 log.go:172] (0xc00014a9a0) (0xc0005a66e0) Stream removed, broadcasting: 1\nI0221 11:22:17.379298    1433 log.go:172] (0xc00014a9a0) Go away received\nI0221 11:22:17.379582    1433 log.go:172] (0xc00014a9a0) (0xc0005a66e0) Stream removed, broadcasting: 1\nI0221 11:22:17.379613    1433 log.go:172] (0xc00014a9a0) (0xc000820000) Stream removed, broadcasting: 3\nI0221 11:22:17.379626    1433 log.go:172] (0xc00014a9a0) (0xc0008c4000) Stream removed, broadcasting: 5\n"
Feb 21 11:22:17.387: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 11:22:17.387: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 11:22:17.387: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 11:22:17.407: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 21 11:22:27.422: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 11:22:27.423: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 11:22:27.423: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 11:22:27.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999997039s
Feb 21 11:22:28.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968951797s
Feb 21 11:22:30.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.92876706s
Feb 21 11:22:31.219: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.220386619s
Feb 21 11:22:32.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.21265465s
Feb 21 11:22:33.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.200924194s
Feb 21 11:22:34.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.183134627s
Feb 21 11:22:35.431: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.168948832s
Feb 21 11:22:36.841: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.001222182s
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods run in namespace e2e-tests-statefulset-w8kts
Feb 21 11:22:37.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 11:22:38.350: INFO: stderr: "I0221 11:22:38.037014    1456 log.go:172] (0xc000754370) (0xc000774640) Create stream\nI0221 11:22:38.037084    1456 log.go:172] (0xc000754370) (0xc000774640) Stream added, broadcasting: 1\nI0221 11:22:38.041909    1456 log.go:172] (0xc000754370) Reply frame received for 1\nI0221 11:22:38.041941    1456 log.go:172] (0xc000754370) (0xc000666be0) Create stream\nI0221 11:22:38.041947    1456 log.go:172] (0xc000754370) (0xc000666be0) Stream added, broadcasting: 3\nI0221 11:22:38.043232    1456 log.go:172] (0xc000754370) Reply frame received for 3\nI0221 11:22:38.043266    1456 log.go:172] (0xc000754370) (0xc0002b8000) Create stream\nI0221 11:22:38.043277    1456 log.go:172] (0xc000754370) (0xc0002b8000) Stream added, broadcasting: 5\nI0221 11:22:38.044438    1456 log.go:172] (0xc000754370) Reply frame received for 5\nI0221 11:22:38.195309    1456 log.go:172] (0xc000754370) Data frame received for 3\nI0221 11:22:38.195402    1456 log.go:172] (0xc000666be0) (3) Data frame handling\nI0221 11:22:38.195429    1456 log.go:172] (0xc000666be0) (3) Data frame sent\nI0221 11:22:38.342684    1456 log.go:172] (0xc000754370) (0xc000666be0) Stream removed, broadcasting: 3\nI0221 11:22:38.342794    1456 log.go:172] (0xc000754370) Data frame received for 1\nI0221 11:22:38.342809    1456 log.go:172] (0xc000774640) (1) Data frame handling\nI0221 11:22:38.342824    1456 log.go:172] (0xc000774640) (1) Data frame sent\nI0221 11:22:38.342837    1456 log.go:172] (0xc000754370) (0xc000774640) Stream removed, broadcasting: 1\nI0221 11:22:38.343009    1456 log.go:172] (0xc000754370) (0xc0002b8000) Stream removed, broadcasting: 5\nI0221 11:22:38.343050    1456 log.go:172] (0xc000754370) (0xc000774640) Stream removed, broadcasting: 1\nI0221 11:22:38.343066    1456 log.go:172] (0xc000754370) (0xc000666be0) Stream removed, broadcasting: 3\nI0221 11:22:38.343081    1456 log.go:172] (0xc000754370) (0xc0002b8000) Stream removed, broadcasting: 5\nI0221 11:22:38.343479    1456 log.go:172] (0xc000754370) Go away received\n"
Feb 21 11:22:38.350: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 11:22:38.350: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 11:22:38.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 11:22:39.038: INFO: stderr: "I0221 11:22:38.611755    1478 log.go:172] (0xc000138840) (0xc00065b4a0) Create stream\nI0221 11:22:38.611868    1478 log.go:172] (0xc000138840) (0xc00065b4a0) Stream added, broadcasting: 1\nI0221 11:22:38.616095    1478 log.go:172] (0xc000138840) Reply frame received for 1\nI0221 11:22:38.616121    1478 log.go:172] (0xc000138840) (0xc00065b540) Create stream\nI0221 11:22:38.616126    1478 log.go:172] (0xc000138840) (0xc00065b540) Stream added, broadcasting: 3\nI0221 11:22:38.616873    1478 log.go:172] (0xc000138840) Reply frame received for 3\nI0221 11:22:38.616891    1478 log.go:172] (0xc000138840) (0xc00065b5e0) Create stream\nI0221 11:22:38.616898    1478 log.go:172] (0xc000138840) (0xc00065b5e0) Stream added, broadcasting: 5\nI0221 11:22:38.625191    1478 log.go:172] (0xc000138840) Reply frame received for 5\nI0221 11:22:38.915396    1478 log.go:172] (0xc000138840) Data frame received for 3\nI0221 11:22:38.915470    1478 log.go:172] (0xc00065b540) (3) Data frame handling\nI0221 11:22:38.915499    1478 log.go:172] (0xc00065b540) (3) Data frame sent\nI0221 11:22:39.032058    1478 log.go:172] (0xc000138840) (0xc00065b540) Stream removed, broadcasting: 3\nI0221 11:22:39.032321    1478 log.go:172] (0xc000138840) Data frame received for 1\nI0221 11:22:39.032361    1478 log.go:172] (0xc000138840) (0xc00065b5e0) Stream removed, broadcasting: 5\nI0221 11:22:39.032415    1478 log.go:172] (0xc00065b4a0) (1) Data frame handling\nI0221 11:22:39.032429    1478 log.go:172] (0xc00065b4a0) (1) Data frame sent\nI0221 11:22:39.032449    1478 log.go:172] (0xc000138840) (0xc00065b4a0) Stream removed, broadcasting: 1\nI0221 11:22:39.032476    1478 log.go:172] (0xc000138840) Go away received\nI0221 11:22:39.032758    1478 log.go:172] (0xc000138840) (0xc00065b4a0) Stream removed, broadcasting: 1\nI0221 11:22:39.032773    1478 log.go:172] (0xc000138840) (0xc00065b540) Stream removed, broadcasting: 3\nI0221 11:22:39.032777    1478 log.go:172] (0xc000138840) (0xc00065b5e0) Stream removed, broadcasting: 5\n"
Feb 21 11:22:39.038: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 11:22:39.038: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 11:22:39.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-w8kts ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 11:22:39.377: INFO: stderr: "I0221 11:22:39.165155    1499 log.go:172] (0xc00059c0b0) (0xc0006d66e0) Create stream\nI0221 11:22:39.165255    1499 log.go:172] (0xc00059c0b0) (0xc0006d66e0) Stream added, broadcasting: 1\nI0221 11:22:39.168939    1499 log.go:172] (0xc00059c0b0) Reply frame received for 1\nI0221 11:22:39.168986    1499 log.go:172] (0xc00059c0b0) (0xc0001aedc0) Create stream\nI0221 11:22:39.169005    1499 log.go:172] (0xc00059c0b0) (0xc0001aedc0) Stream added, broadcasting: 3\nI0221 11:22:39.169792    1499 log.go:172] (0xc00059c0b0) Reply frame received for 3\nI0221 11:22:39.169805    1499 log.go:172] (0xc00059c0b0) (0xc0006d6780) Create stream\nI0221 11:22:39.169810    1499 log.go:172] (0xc00059c0b0) (0xc0006d6780) Stream added, broadcasting: 5\nI0221 11:22:39.170690    1499 log.go:172] (0xc00059c0b0) Reply frame received for 5\nI0221 11:22:39.268907    1499 log.go:172] (0xc00059c0b0) Data frame received for 3\nI0221 11:22:39.268944    1499 log.go:172] (0xc0001aedc0) (3) Data frame handling\nI0221 11:22:39.268967    1499 log.go:172] (0xc0001aedc0) (3) Data frame sent\nI0221 11:22:39.373756    1499 log.go:172] (0xc00059c0b0) (0xc0001aedc0) Stream removed, broadcasting: 3\nI0221 11:22:39.373806    1499 log.go:172] (0xc00059c0b0) Data frame received for 1\nI0221 11:22:39.373835    1499 log.go:172] (0xc0006d66e0) (1) Data frame handling\nI0221 11:22:39.373845    1499 log.go:172] (0xc0006d66e0) (1) Data frame sent\nI0221 11:22:39.373854    1499 log.go:172] (0xc00059c0b0) (0xc0006d66e0) Stream removed, broadcasting: 1\nI0221 11:22:39.373864    1499 log.go:172] (0xc00059c0b0) (0xc0006d6780) Stream removed, broadcasting: 5\nI0221 11:22:39.373875    1499 log.go:172] (0xc00059c0b0) Go away received\nI0221 11:22:39.373974    1499 log.go:172] (0xc00059c0b0) (0xc0006d66e0) Stream removed, broadcasting: 1\nI0221 11:22:39.373986    1499 log.go:172] (0xc00059c0b0) (0xc0001aedc0) Stream removed, broadcasting: 3\nI0221 11:22:39.373993    1499 log.go:172] (0xc00059c0b0) (0xc0006d6780) Stream removed, broadcasting: 5\n"
Feb 21 11:22:39.378: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 11:22:39.378: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 11:22:39.378: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 21 11:23:19.417: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w8kts
Feb 21 11:23:19.431: INFO: Scaling statefulset ss to 0
Feb 21 11:23:19.454: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 11:23:19.458: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:23:19.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-w8kts" for this suite.
Feb 21 11:23:25.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:23:25.899: INFO: namespace: e2e-tests-statefulset-w8kts, resource: bindings, ignored listing per whitelist
Feb 21 11:23:25.996: INFO: namespace e2e-tests-statefulset-w8kts deletion completed in 6.424423379s

• [SLOW TEST:136.531 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
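The scale-up above proceeds strictly in ordinal order, and the scale-down halts while any pod is unready, which is what the mv commands in the log are provoking via the nginx readiness probe. A minimal manual reproduction, assuming a StatefulSet named ss serving nginx with a readiness probe on /index.html in a namespace called demo (the suite used a generated e2e-tests-statefulset-* namespace):

# Scale up; pods are created strictly in ordinal order (ss-0, then ss-1, then ss-2).
kubectl -n demo scale statefulset ss --replicas=3
kubectl -n demo get pods -w
# Make ss-0 unready by removing the file its readiness probe serves,
# mirroring the mv commands captured in the log above.
kubectl -n demo exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Scale down; deletion proceeds in reverse ordinal order (ss-2 first)
# and halts while any stateful pod is unhealthy.
kubectl -n demo scale statefulset ss --replicas=0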
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:23:25.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 11:23:26.366: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 48.016724ms)
Feb 21 11:23:26.373: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.881058ms)
Feb 21 11:23:26.377: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.874694ms)
Feb 21 11:23:26.382: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.887495ms)
Feb 21 11:23:26.385: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.45278ms)
Feb 21 11:23:26.389: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.469791ms)
Feb 21 11:23:26.393: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.578683ms)
Feb 21 11:23:26.398: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.719237ms)
Feb 21 11:23:26.403: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.183227ms)
Feb 21 11:23:26.408: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.668602ms)
Feb 21 11:23:26.413: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.447488ms)
Feb 21 11:23:26.418: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.198497ms)
Feb 21 11:23:26.422: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.847967ms)
Feb 21 11:23:26.426: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.217192ms)
Feb 21 11:23:26.431: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.314958ms)
Feb 21 11:23:26.454: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.65418ms)
Feb 21 11:23:26.467: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.775262ms)
Feb 21 11:23:26.477: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.367809ms)
Feb 21 11:23:26.489: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.167019ms)
Feb 21 11:23:26.513: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.976772ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:23:26.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-8qk4q" for this suite.
Feb 21 11:23:32.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:23:32.805: INFO: namespace: e2e-tests-proxy-8qk4q, resource: bindings, ignored listing per whitelist
Feb 21 11:23:32.894: INFO: namespace e2e-tests-proxy-8qk4q deletion completed in 6.360614319s

• [SLOW TEST:6.898 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
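The endpoint polled twenty times above is the node proxy subresource with an explicit kubelet port. It can be queried by hand through kubectl's raw API access; the node name is the one from this run and 10250 is the kubelet's default secure port:

kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"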
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:23:32.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-98181f6e-549c-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 11:23:33.651: INFO: Waiting up to 5m0s for pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-wkr6x" to be "success or failure"
Feb 21 11:23:33.674: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.270753ms
Feb 21 11:23:35.688: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036944736s
Feb 21 11:23:37.733: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081968782s
Feb 21 11:23:40.407: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.755993053s
Feb 21 11:23:42.424: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.772426768s
Feb 21 11:23:48.475: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.823527744s
Feb 21 11:23:50.500: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.849202217s
STEP: Saw pod success
Feb 21 11:23:50.501: INFO: Pod "pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:23:50.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 11:23:51.311: INFO: Waiting for pod pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:23:51.332: INFO: Pod pod-configmaps-98195b46-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:23:51.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wkr6x" for this suite.
Feb 21 11:23:59.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:23:59.601: INFO: namespace: e2e-tests-configmap-wkr6x, resource: bindings, ignored listing per whitelist
Feb 21 11:23:59.676: INFO: namespace e2e-tests-configmap-wkr6x deletion completed in 8.338672607s

• [SLOW TEST:26.782 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
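A sketch of the pattern this test verifies: one ConfigMap mounted at two paths in the same pod. The demo-* names and the busybox image are illustrative stand-ins, not the suite's own objects:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: cm-vol-1
      mountPath: /etc/configmap-volume-1
    - name: cm-vol-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: cm-vol-1
    configMap:
      name: demo-cm
  - name: cm-vol-2
    configMap:
      name: demo-cm
EOF
kubectl logs demo-cm-two-volumes   # prints value-1 twice once the pod has Succeeded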
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:23:59.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a7fc7b7f-549c-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:23:59.888: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-mwd9w" to be "success or failure"
Feb 21 11:23:59.948: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 59.817002ms
Feb 21 11:24:01.962: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073924114s
Feb 21 11:24:04.109: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220950059s
Feb 21 11:24:06.661: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.772752822s
Feb 21 11:24:08.715: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.826900947s
Feb 21 11:24:11.411: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.52309967s
Feb 21 11:24:13.442: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.554412034s
Feb 21 11:24:15.452: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.563944669s
STEP: Saw pod success
Feb 21 11:24:15.452: INFO: Pod "pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:24:15.454: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 11:24:17.499: INFO: Waiting for pod pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:24:20.261: INFO: Pod pod-projected-secrets-a80077c9-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:24:20.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mwd9w" for this suite.
Feb 21 11:24:26.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:24:26.603: INFO: namespace: e2e-tests-projected-mwd9w, resource: bindings, ignored listing per whitelist
Feb 21 11:24:26.802: INFO: namespace e2e-tests-projected-mwd9w deletion completed in 6.511718604s

• [SLOW TEST:27.126 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
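A sketch of the projected-secret mapping being verified, with an items entry that both renames the key and sets an explicit file mode; names and image are illustrative:

kubectl create secret generic demo-projected-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-secret-mapped
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: demo-projected-secret
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400
EOF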
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:24:26.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 21 11:24:26.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 21 11:24:27.153: INFO: stderr: ""
Feb 21 11:24:27.153: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:24:27.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-fw959" for this suite.
Feb 21 11:24:34.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:24:34.110: INFO: namespace: e2e-tests-kubectl-fw959, resource: bindings, ignored listing per whitelist
Feb 21 11:24:34.347: INFO: namespace e2e-tests-kubectl-fw959 deletion completed in 7.154619979s

• [SLOW TEST:7.545 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
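The check itself is a one-liner; either of the following confirms that the core v1 API is advertised by the server:

kubectl api-versions | grep -x v1
kubectl get --raw /api   # the discovery document also lists "v1" under .versions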
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:24:34.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-zlkd
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 11:24:34.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zlkd" in namespace "e2e-tests-subpath-phcjk" to be "success or failure"
Feb 21 11:24:34.615: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 41.453121ms
Feb 21 11:24:36.951: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377128587s
Feb 21 11:24:38.967: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392899125s
Feb 21 11:24:41.013: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438566063s
Feb 21 11:24:43.061: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486768808s
Feb 21 11:24:45.316: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.741777473s
Feb 21 11:24:47.332: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757600741s
Feb 21 11:24:53.534: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 18.95966972s
Feb 21 11:24:55.553: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.979185992s
Feb 21 11:24:58.646: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 24.071578536s
Feb 21 11:25:00.657: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 26.082917367s
Feb 21 11:25:02.671: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 28.097032374s
Feb 21 11:25:04.692: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 30.118146807s
Feb 21 11:25:06.702: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 32.128079679s
Feb 21 11:25:08.728: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 34.153921143s
Feb 21 11:25:10.900: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Running", Reason="", readiness=false. Elapsed: 36.325875585s
Feb 21 11:25:13.075: INFO: Pod "pod-subpath-test-configmap-zlkd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.501455136s
STEP: Saw pod success
Feb 21 11:25:13.076: INFO: Pod "pod-subpath-test-configmap-zlkd" satisfied condition "success or failure"
Feb 21 11:25:13.086: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-zlkd container test-container-subpath-configmap-zlkd: 
STEP: delete the pod
Feb 21 11:25:13.691: INFO: Waiting for pod pod-subpath-test-configmap-zlkd to disappear
Feb 21 11:25:13.712: INFO: Pod pod-subpath-test-configmap-zlkd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zlkd
Feb 21 11:25:13.712: INFO: Deleting pod "pod-subpath-test-configmap-zlkd" in namespace "e2e-tests-subpath-phcjk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:25:13.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-phcjk" for this suite.
Feb 21 11:25:21.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:25:22.559: INFO: namespace: e2e-tests-subpath-phcjk, resource: bindings, ignored listing per whitelist
Feb 21 11:25:22.651: INFO: namespace e2e-tests-subpath-phcjk deletion completed in 8.858163956s

• [SLOW TEST:48.304 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
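A sketch of the atomic-writer subPath mount exercised above: a single ConfigMap key mounted as one file via subPath; names and image are illustrative:

kubectl create configmap demo-subpath-cm --from-literal=configmap-key=mount-tested-value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-subpath-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/configmap-key"]
    volumeMounts:
    - name: cm-vol
      mountPath: /test-volume/configmap-key
      subPath: configmap-key
  volumes:
  - name: cm-vol
    configMap:
      name: demo-subpath-cm
EOF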
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:25:22.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:25:22.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-fg5d2" to be "success or failure"
Feb 21 11:25:22.993: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.562823ms
Feb 21 11:25:27.561: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.582817432s
Feb 21 11:25:29.752: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.774195354s
Feb 21 11:25:31.771: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.793056872s
Feb 21 11:25:35.143: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.164520971s
Feb 21 11:25:37.300: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.321622489s
Feb 21 11:25:39.557: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.578265991s
Feb 21 11:25:41.566: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.587726187s
STEP: Saw pod success
Feb 21 11:25:41.566: INFO: Pod "downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:25:41.569: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:25:41.758: INFO: Waiting for pod downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:25:41.763: INFO: Pod downwardapi-volume-d98963a0-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:25:41.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fg5d2" for this suite.
Feb 21 11:25:47.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:25:48.038: INFO: namespace: e2e-tests-downward-api-fg5d2, resource: bindings, ignored listing per whitelist
Feb 21 11:25:48.100: INFO: namespace e2e-tests-downward-api-fg5d2 deletion completed in 6.331009243s

• [SLOW TEST:25.449 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
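A sketch of the downward API volume used here, exposing the container's own memory limit as a file; names, image, and the 64Mi limit are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-downwardapi-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      requests:
        cpu: 125m
        memory: 32Mi
      limits:
        cpu: 250m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
# Once the pod has Succeeded, its log contains the limit in bytes (67108864 for 64Mi).
kubectl logs demo-downwardapi-memlimit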
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:25:48.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-eadc4bbc-549c-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:25:52.030: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-q5pp6" to be "success or failure"
Feb 21 11:25:52.049: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 18.97537ms
Feb 21 11:25:54.349: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318513412s
Feb 21 11:25:56.369: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338983326s
Feb 21 11:25:58.913: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.882873603s
Feb 21 11:26:00.928: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89739961s
Feb 21 11:26:02.943: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.913065055s
STEP: Saw pod success
Feb 21 11:26:02.943: INFO: Pod "pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:26:02.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 11:26:03.096: INFO: Waiting for pod pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:26:03.109: INFO: Pod pod-projected-secrets-eadcf122-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:26:03.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q5pp6" for this suite.
Feb 21 11:26:11.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:26:11.276: INFO: namespace: e2e-tests-projected-q5pp6, resource: bindings, ignored listing per whitelist
Feb 21 11:26:11.286: INFO: namespace e2e-tests-projected-q5pp6 deletion completed in 8.170468606s

• [SLOW TEST:23.186 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
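This variant mirrors the multi-volume ConfigMap example earlier, with a projected secret as the source instead; a compact illustrative sketch:

kubectl create secret generic demo-multi-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-secret-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-vol-1
      mountPath: /etc/secret-volume-1
    - name: secret-vol-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-vol-1
    projected:
      sources:
      - secret:
          name: demo-multi-secret
  - name: secret-vol-2
    projected:
      sources:
      - secret:
          name: demo-multi-secret
EOF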
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:26:11.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 11:26:11.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-66dgt'
Feb 21 11:26:13.474: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 11:26:13.474: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 21 11:26:17.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-66dgt'
Feb 21 11:26:18.071: INFO: stderr: ""
Feb 21 11:26:18.072: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:26:18.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-66dgt" for this suite.
Feb 21 11:26:26.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:26:26.255: INFO: namespace: e2e-tests-kubectl-66dgt, resource: bindings, ignored listing per whitelist
Feb 21 11:26:26.274: INFO: namespace e2e-tests-kubectl-66dgt deletion completed in 8.19065281s

• [SLOW TEST:14.987 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
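The stderr captured above is kubectl 1.13 warning that the implicit --generator=deployment/apps.v1 is deprecated. On this kubectl version, the non-deprecated equivalents of what the test ran would be:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment e2e-test-nginx-deployment
# or, to get a single pod instead of a deployment:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine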
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:26:26.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:26:26.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-mrjwq" to be "success or failure"
Feb 21 11:26:26.483: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 71.187608ms
Feb 21 11:26:28.535: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122744148s
Feb 21 11:26:30.563: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151374384s
Feb 21 11:26:32.584: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172691677s
Feb 21 11:26:34.599: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187194923s
Feb 21 11:26:36.644: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.232177149s
Feb 21 11:26:38.662: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.250599883s
STEP: Saw pod success
Feb 21 11:26:38.663: INFO: Pod "downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:26:38.667: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:26:38.764: INFO: Waiting for pod downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:26:38.775: INFO: Pod downwardapi-volume-ff5b644a-549c-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:26:38.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mrjwq" for this suite.
Feb 21 11:26:44.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:26:44.955: INFO: namespace: e2e-tests-projected-mrjwq, resource: bindings, ignored listing per whitelist
Feb 21 11:26:44.980: INFO: namespace e2e-tests-projected-mrjwq deletion completed in 6.185321073s

• [SLOW TEST:18.706 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
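A sketch of a projected downward API volume with defaultMode set, the behavior checked above; names, image, and the 0400 mode are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
# The pod log should show 400 (the applied mode) followed by the pod's name.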
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:26:44.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 21 11:26:45.219: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:27:02.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-kgzp5" for this suite.
Feb 21 11:27:08.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:27:08.161: INFO: namespace: e2e-tests-init-container-kgzp5, resource: bindings, ignored listing per whitelist
Feb 21 11:27:08.225: INFO: namespace e2e-tests-init-container-kgzp5 deletion completed in 6.137908321s

• [SLOW TEST:23.245 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
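A sketch of the failure mode being asserted: with restartPolicy Never, a failing init container is not retried and the app container never starts; names and image are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-failing-init
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fail
    image: busybox
    command: ["sh", "-c", "exit 1"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo this should never run"]
EOF
# The app container never starts and the pod ends in phase Failed (STATUS Init:Error):
kubectl get pod demo-failing-init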
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:27:08.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 21 11:27:08.398: INFO: Waiting up to 5m0s for pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-9rkl9" to be "success or failure"
Feb 21 11:27:08.408: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.832421ms
Feb 21 11:27:10.416: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018191128s
Feb 21 11:27:12.488: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090216266s
Feb 21 11:27:19.901: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.503509641s
Feb 21 11:27:21.919: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.520778329s
Feb 21 11:27:23.953: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.555366859s
STEP: Saw pod success
Feb 21 11:27:23.953: INFO: Pod "downward-api-1860ae59-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:27:23.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1860ae59-549d-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 11:27:24.773: INFO: Waiting for pod downward-api-1860ae59-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:27:24.787: INFO: Pod downward-api-1860ae59-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:27:24.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9rkl9" for this suite.
Feb 21 11:27:30.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:27:31.152: INFO: namespace: e2e-tests-downward-api-9rkl9, resource: bindings, ignored listing per whitelist
Feb 21 11:27:31.666: INFO: namespace e2e-tests-downward-api-9rkl9 deletion completed in 6.868553426s

• [SLOW TEST:23.441 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
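The Downward API spec above injects the container's own requests and limits into its environment. A hedged sketch of that wiring (names and resource values are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

Unless a divisor is given, the values are rendered in whole cores and bytes, which is what the dapi-container prints and the framework then checks in its logs.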
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:27:31.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-26c4df05-549d-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:27:32.610: INFO: Waiting up to 5m0s for pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-r5d6q" to be "success or failure"
Feb 21 11:27:32.718: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 107.063154ms
Feb 21 11:27:34.764: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153392519s
Feb 21 11:27:36.780: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169152605s
Feb 21 11:27:39.190: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.579946794s
Feb 21 11:27:41.208: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.597285632s
Feb 21 11:27:43.622: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.01109452s
STEP: Saw pod success
Feb 21 11:27:43.622: INFO: Pod "pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:27:43.878: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 11:27:44.100: INFO: Waiting for pod pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:27:44.122: INFO: Pod pod-secrets-26cb8844-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:27:44.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r5d6q" for this suite.
Feb 21 11:27:52.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:27:52.602: INFO: namespace: e2e-tests-secrets-r5d6q, resource: bindings, ignored listing per whitelist
Feb 21 11:27:52.893: INFO: namespace e2e-tests-secrets-r5d6q deletion completed in 8.75954971s

• [SLOW TEST:21.227 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
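Here the secret is consumed as a volume with an explicit defaultMode while the pod runs as a non-root user inside an fsGroup, so both file modes and readability can be asserted. An illustrative equivalent (secret name, key and IDs are assumptions):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-vol-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440    # group-readable, so the non-root user can read the files via the fsGroup
EOF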
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:27:52.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:27:53.229: INFO: Waiting up to 5m0s for pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-rlsgp" to be "success or failure"
Feb 21 11:27:53.266: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.327847ms
Feb 21 11:27:55.282: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052926185s
Feb 21 11:27:57.294: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06490065s
Feb 21 11:27:59.308: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078686074s
Feb 21 11:28:01.317: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087126214s
STEP: Saw pod success
Feb 21 11:28:01.317: INFO: Pod "downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:28:01.322: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:28:01.706: INFO: Waiting for pod downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:28:01.757: INFO: Pod downwardapi-volume-331a0ad5-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:28:01.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rlsgp" for this suite.
Feb 21 11:28:07.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:28:08.095: INFO: namespace: e2e-tests-projected-rlsgp, resource: bindings, ignored listing per whitelist
Feb 21 11:28:08.107: INFO: namespace e2e-tests-projected-rlsgp deletion completed in 6.218342452s

• [SLOW TEST:15.213 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
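This projected downwardAPI spec leans on the rule that limits.memory falls back to the node's allocatable memory when the container declares no memory limit. A sketch of the relevant volume wiring (paths and names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # intentionally no resources.limits.memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

The mounted file should then contain the node's allocatable memory in bytes rather than being empty.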
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:28:08.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 21 11:28:08.291: INFO: Waiting up to 5m0s for pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-84x5h" to be "success or failure"
Feb 21 11:28:08.327: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 35.822205ms
Feb 21 11:28:10.334: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043065106s
Feb 21 11:28:12.365: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074023796s
Feb 21 11:28:14.480: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188588638s
Feb 21 11:28:16.560: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268329405s
Feb 21 11:28:18.597: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.305608186s
STEP: Saw pod success
Feb 21 11:28:18.597: INFO: Pod "downward-api-3c156974-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:28:18.645: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-3c156974-549d-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 11:28:18.753: INFO: Waiting for pod downward-api-3c156974-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:28:18.761: INFO: Pod downward-api-3c156974-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:28:18.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-84x5h" for this suite.
Feb 21 11:28:26.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:28:26.951: INFO: namespace: e2e-tests-downward-api-84x5h, resource: bindings, ignored listing per whitelist
Feb 21 11:28:26.982: INFO: namespace e2e-tests-downward-api-84x5h deletion completed in 8.215634524s

• [SLOW TEST:18.875 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
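For the pod-UID variant only the env stanza differs; the value comes from the fieldRef path metadata.uid (the variable name is an assumption):

    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid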
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:28:26.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-47535648-549d-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:28:27.163: INFO: Waiting up to 5m0s for pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-r4wc9" to be "success or failure"
Feb 21 11:28:27.177: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.536819ms
Feb 21 11:28:29.187: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023906604s
Feb 21 11:28:31.210: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046681657s
Feb 21 11:28:33.217: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054308809s
Feb 21 11:28:35.465: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302309558s
Feb 21 11:28:37.496: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.333187031s
Feb 21 11:28:39.509: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.346404527s
STEP: Saw pod success
Feb 21 11:28:39.509: INFO: Pod "pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:28:39.522: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 11:28:39.594: INFO: Waiting for pod pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:28:39.601: INFO: Pod pod-secrets-475416b1-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:28:39.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-r4wc9" for this suite.
Feb 21 11:28:45.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:28:45.753: INFO: namespace: e2e-tests-secrets-r4wc9, resource: bindings, ignored listing per whitelist
Feb 21 11:28:45.925: INFO: namespace e2e-tests-secrets-r4wc9 deletion completed in 6.317930092s

• [SLOW TEST:18.942 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:28:45.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:28:46.201: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-l4c86" to be "success or failure"
Feb 21 11:28:46.233: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 32.183317ms
Feb 21 11:28:48.248: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046828485s
Feb 21 11:28:50.259: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057519299s
Feb 21 11:28:52.607: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405910871s
Feb 21 11:28:54.652: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.450857911s
Feb 21 11:28:57.062: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.860449303s
STEP: Saw pod success
Feb 21 11:28:57.062: INFO: Pod "downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:28:57.121: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:28:57.274: INFO: Waiting for pod downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:28:57.280: INFO: Pod downwardapi-volume-52ad3792-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:28:57.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l4c86" for this suite.
Feb 21 11:29:03.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:29:03.453: INFO: namespace: e2e-tests-projected-l4c86, resource: bindings, ignored listing per whitelist
Feb 21 11:29:03.489: INFO: namespace e2e-tests-projected-l4c86 deletion completed in 6.203696798s

• [SLOW TEST:17.563 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:29:03.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 21 11:29:03.666: INFO: Waiting up to 5m0s for pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-mc76v" to be "success or failure"
Feb 21 11:29:03.685: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.528343ms
Feb 21 11:29:05.752: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086662359s
Feb 21 11:29:07.764: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097970335s
Feb 21 11:29:10.010: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344043054s
Feb 21 11:29:12.042: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.376039287s
Feb 21 11:29:14.080: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.41388109s
STEP: Saw pod success
Feb 21 11:29:14.080: INFO: Pod "pod-5d152e7c-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:29:14.097: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-5d152e7c-549d-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 11:29:14.283: INFO: Waiting for pod pod-5d152e7c-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:29:14.289: INFO: Pod pod-5d152e7c-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:29:14.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mc76v" for this suite.
Feb 21 11:29:20.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:29:20.612: INFO: namespace: e2e-tests-emptydir-mc76v, resource: bindings, ignored listing per whitelist
Feb 21 11:29:20.626: INFO: namespace e2e-tests-emptydir-mc76v deletion completed in 6.32985719s

• [SLOW TEST:17.136 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
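The emptyDir spec checks the mode of a volume backed by the node's default medium (disk rather than tmpfs). A minimal sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # expect a world-writable directory
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # default medium; 'medium: Memory' would switch to tmpfs
EOF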
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:29:20.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6748ac95-549d-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:29:20.791: INFO: Waiting up to 5m0s for pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-xpstw" to be "success or failure"
Feb 21 11:29:20.802: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417608ms
Feb 21 11:29:23.108: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316568528s
Feb 21 11:29:25.126: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334469719s
Feb 21 11:29:27.137: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345659291s
Feb 21 11:29:29.799: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.007372564s
Feb 21 11:29:31.811: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.020198279s
Feb 21 11:29:34.498: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.70658608s
STEP: Saw pod success
Feb 21 11:29:34.498: INFO: Pod "pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:29:34.517: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 11:29:34.781: INFO: Waiting for pod pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:29:34.799: INFO: Pod pod-secrets-6749af34-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:29:34.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xpstw" for this suite.
Feb 21 11:29:42.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:29:42.875: INFO: namespace: e2e-tests-secrets-xpstw, resource: bindings, ignored listing per whitelist
Feb 21 11:29:42.959: INFO: namespace e2e-tests-secrets-xpstw deletion completed in 8.152868896s

• [SLOW TEST:22.333 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
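Consuming one secret in multiple volumes of the same pod is just two volume entries that point at the same secretName, each with its own mount path (names are illustrative):

    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret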
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:29:42.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:30:52.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-g7t2q" for this suite.
Feb 21 11:30:58.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:30:58.385: INFO: namespace: e2e-tests-container-runtime-g7t2q, resource: bindings, ignored listing per whitelist
Feb 21 11:30:58.387: INFO: namespace e2e-tests-container-runtime-g7t2q deletion completed in 6.164331449s

• [SLOW TEST:75.428 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
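The container-runtime spec runs commands that exit and then inspects RestartCount, Phase, the Ready condition and State; the container names (terminate-cmd-rpa/rpof/rpn) suggest the RestartAlways, OnFailure and Never policies. The same fields can be checked by hand with jsonpath (pod name, policy and exit code here are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]    # non-zero exit, so OnFailure keeps restarting it
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state}{"\n"}'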
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:30:58.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 11:30:58.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 21 11:30:58.628: INFO: stderr: ""
Feb 21 11:30:58.628: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 21 11:30:58.635: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:30:58.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-46q7k" for this suite.
Feb 21 11:31:04.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:31:04.834: INFO: namespace: e2e-tests-kubectl-46q7k, resource: bindings, ignored listing per whitelist
Feb 21 11:31:05.019: INFO: namespace e2e-tests-kubectl-46q7k deletion completed in 6.318394669s

S [SKIPPING] [6.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 21 11:30:58.635: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
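The describe spec was skipped because the framework requires a server of at least v1.13.12. Run by hand against a suitable cluster, the check reduces to something like (namespace placeholder left unfilled):

kubectl version
kubectl describe rc -l name=update-demo --namespace=<ns>
kubectl describe pods -l name=update-demo --namespace=<ns>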
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:31:05.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-qgncn/secret-test-a588227d-549d-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:31:05.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-qgncn" to be "success or failure"
Feb 21 11:31:05.312: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 70.315086ms
Feb 21 11:31:07.883: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.641687022s
Feb 21 11:31:09.896: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.654267149s
Feb 21 11:31:12.637: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.39603083s
Feb 21 11:31:14.660: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.418864686s
Feb 21 11:31:16.677: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.435460088s
STEP: Saw pod success
Feb 21 11:31:16.677: INFO: Pod "pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:31:16.695: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008 container env-test: 
STEP: delete the pod
Feb 21 11:31:17.672: INFO: Waiting for pod pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:31:17.691: INFO: Pod pod-configmaps-a58a6b37-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:31:17.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qgncn" for this suite.
Feb 21 11:31:23.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:31:23.936: INFO: namespace: e2e-tests-secrets-qgncn, resource: bindings, ignored listing per whitelist
Feb 21 11:31:23.939: INFO: namespace e2e-tests-secrets-qgncn deletion completed in 6.238694683s

• [SLOW TEST:18.919 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
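Consuming a secret through the environment rather than a volume uses env.valueFrom.secretKeyRef (or envFrom to pull in every key). An illustrative sketch (secret and variable names are assumptions):

kubectl create secret generic env-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret
          key: data-1
EOF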
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:31:23.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 21 11:31:24.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:24.475: INFO: stderr: ""
Feb 21 11:31:24.475: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 11:31:24.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:24.615: INFO: stderr: ""
Feb 21 11:31:24.615: INFO: stdout: "update-demo-nautilus-gl5pv update-demo-nautilus-rmdcv "
Feb 21 11:31:24.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gl5pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:24.806: INFO: stderr: ""
Feb 21 11:31:24.807: INFO: stdout: ""
Feb 21 11:31:24.807: INFO: update-demo-nautilus-gl5pv is created but not running
Feb 21 11:31:29.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:30.051: INFO: stderr: ""
Feb 21 11:31:30.051: INFO: stdout: "update-demo-nautilus-gl5pv update-demo-nautilus-rmdcv "
Feb 21 11:31:30.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gl5pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:30.232: INFO: stderr: ""
Feb 21 11:31:30.232: INFO: stdout: ""
Feb 21 11:31:30.232: INFO: update-demo-nautilus-gl5pv is created but not running
Feb 21 11:31:35.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:35.462: INFO: stderr: ""
Feb 21 11:31:35.462: INFO: stdout: "update-demo-nautilus-gl5pv update-demo-nautilus-rmdcv "
Feb 21 11:31:35.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gl5pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:35.905: INFO: stderr: ""
Feb 21 11:31:35.906: INFO: stdout: ""
Feb 21 11:31:35.906: INFO: update-demo-nautilus-gl5pv is created but not running
Feb 21 11:31:40.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.053: INFO: stderr: ""
Feb 21 11:31:41.053: INFO: stdout: "update-demo-nautilus-gl5pv update-demo-nautilus-rmdcv "
Feb 21 11:31:41.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gl5pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.130: INFO: stderr: ""
Feb 21 11:31:41.130: INFO: stdout: "true"
Feb 21 11:31:41.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gl5pv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.221: INFO: stderr: ""
Feb 21 11:31:41.222: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 11:31:41.222: INFO: validating pod update-demo-nautilus-gl5pv
Feb 21 11:31:41.251: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 11:31:41.251: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 11:31:41.251: INFO: update-demo-nautilus-gl5pv is verified up and running
Feb 21 11:31:41.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmdcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.359: INFO: stderr: ""
Feb 21 11:31:41.360: INFO: stdout: "true"
Feb 21 11:31:41.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rmdcv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.451: INFO: stderr: ""
Feb 21 11:31:41.451: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 11:31:41.451: INFO: validating pod update-demo-nautilus-rmdcv
Feb 21 11:31:41.475: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 11:31:41.476: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 11:31:41.476: INFO: update-demo-nautilus-rmdcv is verified up and running
STEP: using delete to clean up resources
Feb 21 11:31:41.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.561: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 11:31:41.561: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 21 11:31:41.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ccxrj'
Feb 21 11:31:41.684: INFO: stderr: "No resources found.\n"
Feb 21 11:31:41.684: INFO: stdout: ""
Feb 21 11:31:41.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ccxrj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 21 11:31:41.845: INFO: stderr: ""
Feb 21 11:31:41.845: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:31:41.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ccxrj" for this suite.
Feb 21 11:32:07.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:32:07.979: INFO: namespace: e2e-tests-kubectl-ccxrj, resource: bindings, ignored listing per whitelist
Feb 21 11:32:08.061: INFO: namespace e2e-tests-kubectl-ccxrj deletion completed in 26.18615672s

• [SLOW TEST:44.122 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
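The Update Demo flow pipes a ReplicationController manifest into `kubectl create -f -`, polls with go-templates until both replicas report a running update-demo container, validates the served image data, then force-deletes everything. The manifest itself never appears in the log; a plausible equivalent, inferred only from the pod names, labels and image visible in the output and therefore an approximation rather than the real fixture, is:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF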
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:32:08.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 21 11:32:08.182: INFO: Waiting up to 5m0s for pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008" in namespace "e2e-tests-containers-xk9xf" to be "success or failure"
Feb 21 11:32:08.189: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920737ms
Feb 21 11:32:10.201: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019039731s
Feb 21 11:32:12.212: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030183275s
Feb 21 11:32:14.648: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466114256s
Feb 21 11:32:18.249: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066701303s
Feb 21 11:32:20.257: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.074532125s
Feb 21 11:32:22.479: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.297386139s
STEP: Saw pod success
Feb 21 11:32:22.480: INFO: Pod "client-containers-cb10d079-549d-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:32:22.492: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-cb10d079-549d-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 11:32:23.047: INFO: Waiting for pod client-containers-cb10d079-549d-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:32:23.108: INFO: Pod client-containers-cb10d079-549d-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:32:23.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-xk9xf" for this suite.
Feb 21 11:32:29.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:32:29.425: INFO: namespace: e2e-tests-containers-xk9xf, resource: bindings, ignored listing per whitelist
Feb 21 11:32:29.433: INFO: namespace e2e-tests-containers-xk9xf deletion completed in 6.255086519s

• [SLOW TEST:21.372 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
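Overriding an image's default entrypoint and arguments is done with the container's command and args fields; a minimal sketch (the echoed payload is illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
EOF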
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:32:29.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0221 11:33:00.650472       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 11:33:00.650: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:33:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-74666" for this suite.
Feb 21 11:33:08.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:33:08.909: INFO: namespace: e2e-tests-gc-74666, resource: bindings, ignored listing per whitelist
Feb 21 11:33:08.978: INFO: namespace e2e-tests-gc-74666 deletion completed in 8.321454491s

• [SLOW TEST:39.545 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
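Orphaning is requested through deleteOptions.propagationPolicy=Orphan; with the kubectl generation used in this run that corresponds to --cascade=false (newer clients spell it --cascade=orphan). A rough manual equivalent (deployment name and image are assumptions):

kubectl create deployment gc-demo --image=nginx
kubectl get rs -l app=gc-demo                   # note the ReplicaSet the Deployment created
kubectl delete deployment gc-demo --cascade=false
kubectl get rs -l app=gc-demo                   # ReplicaSet should still exist: orphaned, not deleted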
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:33:08.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-f02e18a6-549d-11ea-b1f8-0242ac110008
STEP: Creating secret with name s-test-opt-upd-f02e1a68-549d-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f02e18a6-549d-11ea-b1f8-0242ac110008
STEP: Updating secret s-test-opt-upd-f02e1a68-549d-11ea-b1f8-0242ac110008
STEP: Creating secret with name s-test-opt-create-f02e1ab4-549d-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:34:45.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k54fx" for this suite.
Feb 21 11:35:09.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:35:10.020: INFO: namespace: e2e-tests-projected-k54fx, resource: bindings, ignored listing per whitelist
Feb 21 11:35:10.029: INFO: namespace e2e-tests-projected-k54fx deletion completed in 24.275771382s

• [SLOW TEST:121.051 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
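The optional-update spec mounts projected secret sources flagged optional: true, then deletes one secret, updates another and creates a third while waiting for the mounted files to follow. The relevant volume stanza looks roughly like this (secret names shortened for illustration):

  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
      - secret:
          name: s-test-opt-create
          optional: true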
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:35:10.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:35:10.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-ww5rt" for this suite.
Feb 21 11:35:16.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:35:16.528: INFO: namespace: e2e-tests-services-ww5rt, resource: bindings, ignored listing per whitelist
Feb 21 11:35:16.637: INFO: namespace e2e-tests-services-ww5rt deletion completed in 6.317284306s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.608 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
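[annotation] For orientation (commands are not output from this run): the spec above only checks that the built-in kubernetes service in the default namespace fronts the API server on the secure https port, which can be eyeballed with kubectl given a working kubeconfig.

kubectl get service kubernetes -n default -o wide
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[?(@.name=="https")].port}'   # expect 443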
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:35:16.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 21 11:35:16.882: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:35:44.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mmd4x" for this suite.
Feb 21 11:36:08.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:36:08.814: INFO: namespace: e2e-tests-init-container-mmd4x, resource: bindings, ignored listing per whitelist
Feb 21 11:36:09.060: INFO: namespace e2e-tests-init-container-mmd4x deletion completed in 24.377629622s

• [SLOW TEST:52.423 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
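[annotation] A rough stand-alone version of what the spec above asserts, using hypothetical names rather than the generated ones: on a RestartAlways pod, every init container runs to completion, in order, before the app container starts.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "exit 0"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "exit 0"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # Completed Completed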
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:36:09.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2nl5p
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 21 11:36:09.250: INFO: Found 0 stateful pods, waiting for 3
Feb 21 11:36:19.265: INFO: Found 2 stateful pods, waiting for 3
Feb 21 11:36:29.264: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:36:29.264: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:36:29.264: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:36:39.271: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:36:39.272: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:36:39.272: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 21 11:36:39.319: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 21 11:36:49.451: INFO: Updating stateful set ss2
Feb 21 11:36:49.466: INFO: Waiting for Pod e2e-tests-statefulset-2nl5p/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 11:36:59.495: INFO: Waiting for Pod e2e-tests-statefulset-2nl5p/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 21 11:37:10.695: INFO: Found 2 stateful pods, waiting for 3
Feb 21 11:37:20.779: INFO: Found 2 stateful pods, waiting for 3
Feb 21 11:37:30.797: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:37:30.797: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:37:30.797: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:37:40.714: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:37:40.714: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 11:37:40.714: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 21 11:37:40.766: INFO: Updating stateful set ss2
Feb 21 11:37:40.778: INFO: Waiting for Pod e2e-tests-statefulset-2nl5p/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 11:37:50.863: INFO: Updating stateful set ss2
Feb 21 11:37:51.024: INFO: Waiting for StatefulSet e2e-tests-statefulset-2nl5p/ss2 to complete update
Feb 21 11:37:51.024: INFO: Waiting for Pod e2e-tests-statefulset-2nl5p/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 11:38:01.037: INFO: Waiting for StatefulSet e2e-tests-statefulset-2nl5p/ss2 to complete update
Feb 21 11:38:01.037: INFO: Waiting for Pod e2e-tests-statefulset-2nl5p/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 11:38:11.063: INFO: Waiting for StatefulSet e2e-tests-statefulset-2nl5p/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 21 11:38:21.048: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2nl5p
Feb 21 11:38:21.056: INFO: Scaling statefulset ss2 to 0
Feb 21 11:39:01.125: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 11:39:01.135: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:39:01.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2nl5p" for this suite.
Feb 21 11:39:09.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:39:09.388: INFO: namespace: e2e-tests-statefulset-2nl5p, resource: bindings, ignored listing per whitelist
Feb 21 11:39:09.461: INFO: namespace e2e-tests-statefulset-2nl5p deletion completed in 8.257922222s

• [SLOW TEST:180.401 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
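[annotation] The partition mechanics behind the canary and phased phases above, as a hand-run sketch; the commands are not taken from the log and the container name nginx is an assumption. The updateStrategy must be RollingUpdate for the partition to apply (as it is in this spec). With the partition equal to the replica count no pod is touched; lowering it updates only pods whose ordinal is greater than or equal to the partition, highest ordinal first, which is why ss2-2 moved to the new revision before ss2-1 and ss2-0.

kubectl patch statefulset ss2 --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine   # new revision recorded, no pods updated yet
kubectl patch statefulset ss2 --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   # canary: only ss2-2 rolls
kubectl patch statefulset ss2 --type merge -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   # phased: roll ss2-1, then ss2-0
kubectl rollout status statefulset/ss2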
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:39:09.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c6525abc-549e-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 11:39:09.765: INFO: Waiting up to 5m0s for pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-tjfvm" to be "success or failure"
Feb 21 11:39:09.893: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 127.512507ms
Feb 21 11:39:11.936: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170515559s
Feb 21 11:39:14.351: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586350926s
Feb 21 11:39:16.367: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.601943743s
Feb 21 11:39:18.794: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.029018195s
Feb 21 11:39:20.813: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.047571402s
Feb 21 11:39:22.846: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.080688396s
STEP: Saw pod success
Feb 21 11:39:22.846: INFO: Pod "pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:39:22.860: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 11:39:23.070: INFO: Waiting for pod pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:39:23.082: INFO: Pod pod-secrets-c6589387-549e-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:39:23.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tjfvm" for this suite.
Feb 21 11:39:29.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:39:29.215: INFO: namespace: e2e-tests-secrets-tjfvm, resource: bindings, ignored listing per whitelist
Feb 21 11:39:29.287: INFO: namespace e2e-tests-secrets-tjfvm deletion completed in 6.19444838s

• [SLOW TEST:19.825 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
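[annotation] A self-contained sketch of the consume-secrets pod above, with hypothetical names in place of the generated ones: the container mounts the secret as files, and the "success or failure" wait in the log corresponds to the pod reaching Succeeded after reading them.

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF
kubectl logs pod-secrets   # prints value-1 once the pod has Succeeded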
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:39:29.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:39:29.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-ct4nn" for this suite.
Feb 21 11:39:36.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:39:36.073: INFO: namespace: e2e-tests-kubelet-test-ct4nn, resource: bindings, ignored listing per whitelist
Feb 21 11:39:36.210: INFO: namespace e2e-tests-kubelet-test-ct4nn deletion completed in 6.34739643s

• [SLOW TEST:6.923 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
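[annotation] The spec above only needs a pod whose single busybox container always exits non-zero and then checks that such a pod can still be deleted cleanly; a rough hand-run analogue, with a hypothetical pod name:

kubectl run bin-false --image=busybox --restart=Never -- /bin/false
kubectl delete pod bin-false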
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:39:36.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 21 11:39:36.453: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22417978,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 11:39:36.454: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22417978,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 21 11:39:46.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22417991,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 21 11:39:46.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22417991,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 21 11:39:56.580: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22418004,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 11:39:56.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22418004,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 21 11:40:06.628: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22418017,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 11:40:06.628: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-a,UID:d6420255-549e-11ea-a994-fa163e34d433,ResourceVersion:22418017,Generation:0,CreationTimestamp:2020-02-21 11:39:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 21 11:40:16.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-b,UID:ee36d4ab-549e-11ea-a994-fa163e34d433,ResourceVersion:22418029,Generation:0,CreationTimestamp:2020-02-21 11:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 11:40:16.651: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-b,UID:ee36d4ab-549e-11ea-a994-fa163e34d433,ResourceVersion:22418029,Generation:0,CreationTimestamp:2020-02-21 11:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 21 11:40:26.692: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-b,UID:ee36d4ab-549e-11ea-a994-fa163e34d433,ResourceVersion:22418042,Generation:0,CreationTimestamp:2020-02-21 11:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 11:40:26.693: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zc5cg,SelfLink:/api/v1/namespaces/e2e-tests-watch-zc5cg/configmaps/e2e-watch-test-configmap-b,UID:ee36d4ab-549e-11ea-a994-fa163e34d433,ResourceVersion:22418042,Generation:0,CreationTimestamp:2020-02-21 11:40:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:40:36.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zc5cg" for this suite.
Feb 21 11:40:42.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:40:42.913: INFO: namespace: e2e-tests-watch-zc5cg, resource: bindings, ignored listing per whitelist
Feb 21 11:40:43.000: INFO: namespace e2e-tests-watch-zc5cg deletion completed in 6.290945804s

• [SLOW TEST:66.790 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
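[annotation] A manual analogue of the label-filtered watches above (commands are not from the log): a watch selected on label A only sees configmaps carrying that label, and each mutation arrives as an ADDED / MODIFIED / DELETED event, mirroring the paired notifications the A and A-or-B watchers receive in the log.

kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch &   # the "label A" watcher
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
EOF
kubectl patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"1"}}'   # observed as MODIFIED
kubectl delete configmap e2e-watch-test-configmap-a                                              # observed as DELETED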
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:40:43.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 11:40:43.207: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 21 11:40:43.233: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 21 11:40:48.254: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 21 11:40:54.275: INFO: Creating deployment "test-rolling-update-deployment"
Feb 21 11:40:54.286: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 21 11:40:54.298: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 21 11:40:56.321: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 21 11:40:56.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 11:40:58.346: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 11:41:00.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 11:41:02.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717882054, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 11:41:04.627: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 21 11:41:05.063: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-5hk82,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5hk82/deployments/test-rolling-update-deployment,UID:04a692fe-549f-11ea-a994-fa163e34d433,ResourceVersion:22418134,Generation:1,CreationTimestamp:2020-02-21 11:40:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-21 11:40:54 +0000 UTC 2020-02-21 11:40:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-21 11:41:03 +0000 UTC 2020-02-21 11:40:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 21 11:41:05.095: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-5hk82,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5hk82/replicasets/test-rolling-update-deployment-75db98fb4c,UID:04b00d23-549f-11ea-a994-fa163e34d433,ResourceVersion:22418125,Generation:1,CreationTimestamp:2020-02-21 11:40:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 04a692fe-549f-11ea-a994-fa163e34d433 0xc001e365e7 0xc001e365e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 21 11:41:05.095: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 21 11:41:05.096: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-5hk82,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5hk82/replicasets/test-rolling-update-controller,UID:fe0dcd59-549e-11ea-a994-fa163e34d433,ResourceVersion:22418133,Generation:2,CreationTimestamp:2020-02-21 11:40:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 04a692fe-549f-11ea-a994-fa163e34d433 0xc001e36527 0xc001e36528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 11:41:05.112: INFO: Pod "test-rolling-update-deployment-75db98fb4c-m5xts" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-m5xts,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-5hk82,SelfLink:/api/v1/namespaces/e2e-tests-deployment-5hk82/pods/test-rolling-update-deployment-75db98fb4c-m5xts,UID:04ba7be7-549f-11ea-a994-fa163e34d433,ResourceVersion:22418124,Generation:0,CreationTimestamp:2020-02-21 11:40:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 04b00d23-549f-11ea-a994-fa163e34d433 0xc001f8c8e7 0xc001f8c8e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sp8t4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sp8t4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sp8t4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f8ca80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f8caa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:40:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:41:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:41:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:40:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-21 11:40:54 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-21 11:41:01 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ce4004abd4c719096e7545a1e8890c4d1fbff509091e28f5673a61af5b44aebd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:41:05.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-5hk82" for this suite.
Feb 21 11:41:13.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:41:13.377: INFO: namespace: e2e-tests-deployment-5hk82, resource: bindings, ignored listing per whitelist
Feb 21 11:41:13.384: INFO: namespace e2e-tests-deployment-5hk82 deletion completed in 8.251896378s

• [SLOW TEST:30.383 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
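[annotation] The end state dumped above can be read back with a few stock commands while the test namespace from the log still exists (it is destroyed in the teardown): after the RollingUpdate the deployment owns a new ReplicaSet running the pod and keeps the adopted old ReplicaSet scaled to 0 for revision history.

kubectl -n e2e-tests-deployment-5hk82 get deployment test-rolling-update-deployment -o wide
kubectl -n e2e-tests-deployment-5hk82 get replicasets -l name=sample-pod   # old RS at 0 replicas, new RS at the desired count
kubectl -n e2e-tests-deployment-5hk82 rollout status deployment/test-rolling-update-deployment
kubectl -n e2e-tests-deployment-5hk82 rollout history deployment/test-rolling-update-deployment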
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:41:13.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-g77h4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 21 11:41:13.632: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 21 11:41:51.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-g77h4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 11:41:51.960: INFO: >>> kubeConfig: /root/.kube/config
I0221 11:41:52.042061       8 log.go:172] (0xc000bdd1e0) (0xc000728dc0) Create stream
I0221 11:41:52.042624       8 log.go:172] (0xc000bdd1e0) (0xc000728dc0) Stream added, broadcasting: 1
I0221 11:41:52.051353       8 log.go:172] (0xc000bdd1e0) Reply frame received for 1
I0221 11:41:52.051519       8 log.go:172] (0xc000bdd1e0) (0xc00050c5a0) Create stream
I0221 11:41:52.051538       8 log.go:172] (0xc000bdd1e0) (0xc00050c5a0) Stream added, broadcasting: 3
I0221 11:41:52.053626       8 log.go:172] (0xc000bdd1e0) Reply frame received for 3
I0221 11:41:52.053697       8 log.go:172] (0xc000bdd1e0) (0xc000728f00) Create stream
I0221 11:41:52.053730       8 log.go:172] (0xc000bdd1e0) (0xc000728f00) Stream added, broadcasting: 5
I0221 11:41:52.054841       8 log.go:172] (0xc000bdd1e0) Reply frame received for 5
I0221 11:41:52.295338       8 log.go:172] (0xc000bdd1e0) Data frame received for 3
I0221 11:41:52.295467       8 log.go:172] (0xc00050c5a0) (3) Data frame handling
I0221 11:41:52.295501       8 log.go:172] (0xc00050c5a0) (3) Data frame sent
I0221 11:41:52.704643       8 log.go:172] (0xc000bdd1e0) (0xc00050c5a0) Stream removed, broadcasting: 3
I0221 11:41:52.704846       8 log.go:172] (0xc000bdd1e0) Data frame received for 1
I0221 11:41:52.704916       8 log.go:172] (0xc000bdd1e0) (0xc000728f00) Stream removed, broadcasting: 5
I0221 11:41:52.704964       8 log.go:172] (0xc000728dc0) (1) Data frame handling
I0221 11:41:52.705048       8 log.go:172] (0xc000728dc0) (1) Data frame sent
I0221 11:41:52.705062       8 log.go:172] (0xc000bdd1e0) (0xc000728dc0) Stream removed, broadcasting: 1
I0221 11:41:52.705175       8 log.go:172] (0xc000bdd1e0) Go away received
I0221 11:41:52.705454       8 log.go:172] (0xc000bdd1e0) (0xc000728dc0) Stream removed, broadcasting: 1
I0221 11:41:52.705499       8 log.go:172] (0xc000bdd1e0) (0xc00050c5a0) Stream removed, broadcasting: 3
I0221 11:41:52.705506       8 log.go:172] (0xc000bdd1e0) (0xc000728f00) Stream removed, broadcasting: 5
Feb 21 11:41:52.705: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:41:52.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-g77h4" for this suite.
Feb 21 11:42:18.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:42:18.957: INFO: namespace: e2e-tests-pod-network-test-g77h4, resource: bindings, ignored listing per whitelist
Feb 21 11:42:18.984: INFO: namespace e2e-tests-pod-network-test-g77h4 deletion completed in 26.257113075s

• [SLOW TEST:65.599 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
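[annotation] The single probe behind this spec is visible in the ExecWithOptions line above: from the hostexec container of host-test-container-pod, curl asks the netexec server on one pod IP (10.32.0.5:8080) to dial the other pod (10.32.0.4:8081) over UDP and report the hostname it got back; an empty endpoints map means every expected reply arrived. Re-run by hand it would look roughly like this (pod IPs and the namespace are specific to this run):

kubectl -n e2e-tests-pod-network-test-g77h4 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"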
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:42:18.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 21 11:42:19.368: INFO: Number of nodes with available pods: 0
Feb 21 11:42:19.368: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:20.392: INFO: Number of nodes with available pods: 0
Feb 21 11:42:20.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:21.409: INFO: Number of nodes with available pods: 0
Feb 21 11:42:21.409: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:22.428: INFO: Number of nodes with available pods: 0
Feb 21 11:42:22.428: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:23.398: INFO: Number of nodes with available pods: 0
Feb 21 11:42:23.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:24.607: INFO: Number of nodes with available pods: 0
Feb 21 11:42:24.607: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:25.515: INFO: Number of nodes with available pods: 0
Feb 21 11:42:25.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:26.391: INFO: Number of nodes with available pods: 0
Feb 21 11:42:26.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:27.418: INFO: Number of nodes with available pods: 0
Feb 21 11:42:27.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:28.397: INFO: Number of nodes with available pods: 1
Feb 21 11:42:28.397: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 21 11:42:28.560: INFO: Number of nodes with available pods: 0
Feb 21 11:42:28.560: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:30.532: INFO: Number of nodes with available pods: 0
Feb 21 11:42:30.532: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:30.818: INFO: Number of nodes with available pods: 0
Feb 21 11:42:30.818: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:31.802: INFO: Number of nodes with available pods: 0
Feb 21 11:42:31.802: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:32.595: INFO: Number of nodes with available pods: 0
Feb 21 11:42:32.595: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:34.339: INFO: Number of nodes with available pods: 0
Feb 21 11:42:34.339: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:34.592: INFO: Number of nodes with available pods: 0
Feb 21 11:42:34.592: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:35.589: INFO: Number of nodes with available pods: 0
Feb 21 11:42:35.590: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:37.250: INFO: Number of nodes with available pods: 0
Feb 21 11:42:37.251: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:37.599: INFO: Number of nodes with available pods: 0
Feb 21 11:42:37.599: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:38.703: INFO: Number of nodes with available pods: 0
Feb 21 11:42:38.703: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:39.571: INFO: Number of nodes with available pods: 0
Feb 21 11:42:39.571: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:40.651: INFO: Number of nodes with available pods: 0
Feb 21 11:42:40.651: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 11:42:41.619: INFO: Number of nodes with available pods: 1
Feb 21 11:42:41.620: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-c7wq2, will wait for the garbage collector to delete the pods
Feb 21 11:42:41.720: INFO: Deleting DaemonSet.extensions daemon-set took: 28.226653ms
Feb 21 11:42:41.821: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.528351ms
Feb 21 11:42:49.051: INFO: Number of nodes with available pods: 0
Feb 21 11:42:49.051: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 11:42:49.055: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-c7wq2/daemonsets","resourceVersion":"22418380"},"items":null}

Feb 21 11:42:49.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-c7wq2/pods","resourceVersion":"22418380"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:42:49.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-c7wq2" for this suite.
Feb 21 11:42:55.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:42:55.298: INFO: namespace: e2e-tests-daemonsets-c7wq2, resource: bindings, ignored listing per whitelist
Feb 21 11:42:55.315: INFO: namespace e2e-tests-daemonsets-c7wq2 deletion completed in 6.244381007s

• [SLOW TEST:36.331 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
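[annotation] The revival check above flips a daemon pod's phase to Failed through the API; the observable effect is the same as destroying the pod by hand and watching the DaemonSet controller recreate it on the node. A rough equivalent against the namespace from the log (the pod name placeholder is hypothetical):

kubectl -n e2e-tests-daemonsets-c7wq2 get daemonset daemon-set
kubectl -n e2e-tests-daemonsets-c7wq2 get pods -o wide                   # one daemon pod per eligible node
kubectl -n e2e-tests-daemonsets-c7wq2 delete pod <daemon-set-pod-name>   # stand-in for the forced failure
kubectl -n e2e-tests-daemonsets-c7wq2 get pods -o wide --watch           # controller brings the pod back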
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:42:55.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-qp6tv/configmap-test-4ce87b1f-549f-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 11:42:55.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-qp6tv" to be "success or failure"
Feb 21 11:42:55.616: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 92.863601ms
Feb 21 11:42:57.625: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10193673s
Feb 21 11:42:59.640: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116278136s
Feb 21 11:43:01.851: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328036721s
Feb 21 11:43:03.884: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.361184494s
Feb 21 11:43:05.901: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.377736187s
STEP: Saw pod success
Feb 21 11:43:05.901: INFO: Pod "pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:43:05.909: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008 container env-test: 
STEP: delete the pod
Feb 21 11:43:06.566: INFO: Waiting for pod pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:43:06.577: INFO: Pod pod-configmaps-4ce9c31c-549f-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:43:06.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qp6tv" for this suite.
Feb 21 11:43:14.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:43:14.968: INFO: namespace: e2e-tests-configmap-qp6tv, resource: bindings, ignored listing per whitelist
Feb 21 11:43:14.988: INFO: namespace e2e-tests-configmap-qp6tv deletion completed in 8.397221072s

• [SLOW TEST:19.673 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
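
The spec above consumes a ConfigMap through container environment variables. A hand-written equivalent might look like the following; the ConfigMap name, key, and image are assumptions rather than the generated names from the run.

# Illustrative: a ConfigMap plus a pod that reads one key into an env var.
kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF
kubectl logs pod-configmaps-env   # prints CONFIG_DATA_1=value-1 once the pod has run
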
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:43:14.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-fjkf4 in namespace e2e-tests-proxy-gswrl
I0221 11:43:15.255311       8 runners.go:184] Created replication controller with name: proxy-service-fjkf4, namespace: e2e-tests-proxy-gswrl, replica count: 1
I0221 11:43:16.306435       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:17.307444       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:18.308898       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:19.309480       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:20.309957       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:21.310429       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:22.310787       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:23.311218       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:24.312313       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:25.312820       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:26.313411       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0221 11:43:27.314272       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 11:43:28.315323       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0221 11:43:29.316307       8 runners.go:184] proxy-service-fjkf4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 21 11:43:29.327: INFO: setup took 14.139638422s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 21 11:43:29.360: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/pods/proxy-service-fjkf4-4vprk:160/proxy/: foo (200; 33.090506ms)
Feb 21 11:43:29.377: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/pods/http:proxy-service-fjkf4-4vprk:162/proxy/: bar (200; 49.877382ms)
Feb 21 11:43:29.404: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/services/http:proxy-service-fjkf4:portname2/proxy/: bar (200; 76.485358ms)
Feb 21 11:43:29.406: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/services/http:proxy-service-fjkf4:portname1/proxy/: foo (200; 78.510757ms)
Feb 21 11:43:29.406: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/pods/proxy-service-fjkf4-4vprk:162/proxy/: bar (200; 78.651162ms)
Feb 21 11:43:29.408: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/pods/http:proxy-service-fjkf4-4vprk:160/proxy/: foo (200; 80.178592ms)
Feb 21 11:43:29.413: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gswrl/pods/http:proxy-service-fjkf4-4vprk:1080/proxy/: 
[remainder of the proxy test output is truncated in the captured log]
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 11:43:49.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-56ppv'
Feb 21 11:43:51.701: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 11:43:51.701: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 21 11:43:53.757: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-86v5x]
Feb 21 11:43:53.757: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-86v5x" in namespace "e2e-tests-kubectl-56ppv" to be "running and ready"
Feb 21 11:43:53.765: INFO: Pod "e2e-test-nginx-rc-86v5x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.685837ms
Feb 21 11:43:55.778: INFO: Pod "e2e-test-nginx-rc-86v5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020836747s
Feb 21 11:43:57.830: INFO: Pod "e2e-test-nginx-rc-86v5x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073332841s
Feb 21 11:43:59.843: INFO: Pod "e2e-test-nginx-rc-86v5x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086688952s
Feb 21 11:44:01.866: INFO: Pod "e2e-test-nginx-rc-86v5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.109698967s
Feb 21 11:44:01.867: INFO: Pod "e2e-test-nginx-rc-86v5x" satisfied condition "running and ready"
Feb 21 11:44:01.867: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-86v5x]
Feb 21 11:44:01.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56ppv'
Feb 21 11:44:02.120: INFO: stderr: ""
Feb 21 11:44:02.121: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 21 11:44:02.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-56ppv'
Feb 21 11:44:02.355: INFO: stderr: ""
Feb 21 11:44:02.355: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:44:02.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-56ppv" for this suite.
Feb 21 11:44:24.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:44:24.936: INFO: namespace: e2e-tests-kubectl-56ppv, resource: bindings, ignored listing per whitelist
Feb 21 11:44:24.957: INFO: namespace e2e-tests-kubectl-56ppv deletion completed in 22.592530273s

• [SLOW TEST:36.068 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
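
As the stderr above notes, `kubectl run --generator=run/v1` (which creates a ReplicationController, as the stdout confirms) is deprecated. An explicit manifest achieves the same result without the deprecated generator; the name mirrors the test's, the rest is an illustration.

# Equivalent of the deprecated 'kubectl run ... --generator=run/v1' call above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
EOF
# Aggregate logs through the controller, as the test does:
kubectl logs rc/e2e-test-nginx-rc
# (On newer clusters a Deployment via 'kubectl create deployment' is the usual choice.)
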
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:44:24.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-825369c2-549f-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 11:44:25.171: INFO: Waiting up to 5m0s for pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-ddpcg" to be "success or failure"
Feb 21 11:44:25.181: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.954419ms
Feb 21 11:44:27.813: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642180934s
Feb 21 11:44:29.836: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664766679s
Feb 21 11:44:31.861: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.690031835s
Feb 21 11:44:33.876: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.70556152s
Feb 21 11:44:35.899: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.728545951s
STEP: Saw pod success
Feb 21 11:44:35.900: INFO: Pod "pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:44:35.908: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 11:44:36.170: INFO: Waiting for pod pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:44:36.204: INFO: Pod pod-configmaps-82548123-549f-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:44:36.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ddpcg" for this suite.
Feb 21 11:44:44.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:44:44.348: INFO: namespace: e2e-tests-configmap-ddpcg, resource: bindings, ignored listing per whitelist
Feb 21 11:44:44.394: INFO: namespace e2e-tests-configmap-ddpcg deletion completed in 8.171733639s

• [SLOW TEST:19.438 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
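
The "as non-root" variant above mounts the ConfigMap as a volume while the container runs with a non-zero UID. A sketch of that shape follows; the UID, names, and image are assumptions.

# Illustrative: configMap mounted as a volume, read by a non-root container.
kubectl create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # any non-root UID illustrates the constraint
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
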
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:44:44.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 11:44:44.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-g4ck9'
Feb 21 11:44:44.895: INFO: stderr: ""
Feb 21 11:44:44.895: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 21 11:44:59.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-g4ck9 -o json'
Feb 21 11:45:00.111: INFO: stderr: ""
Feb 21 11:45:00.111: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-21T11:44:44Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-g4ck9\",\n        \"resourceVersion\": \"22418685\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-g4ck9/pods/e2e-test-nginx-pod\",\n        \"uid\": \"8e178e52-549f-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s2jxr\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s2jxr\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s2jxr\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T11:44:45Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T11:44:56Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T11:44:56Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-21T11:44:44Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://f0dd2abd4b1d5fc23943f48e73e04757a87ae82e74df9653c2e30304726f9541\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-21T11:44:55Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-21T11:44:45Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 21 11:45:00.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-g4ck9'
Feb 21 11:45:00.524: INFO: stderr: ""
Feb 21 11:45:00.524: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 21 11:45:00.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-g4ck9'
Feb 21 11:45:08.494: INFO: stderr: ""
Feb 21 11:45:08.495: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:45:08.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-g4ck9" for this suite.
Feb 21 11:45:16.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:45:16.926: INFO: namespace: e2e-tests-kubectl-g4ck9, resource: bindings, ignored listing per whitelist
Feb 21 11:45:17.039: INFO: namespace e2e-tests-kubectl-g4ck9 deletion completed in 8.52370452s

• [SLOW TEST:32.644 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
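
The replace spec above fetches the running pod as JSON, swaps the image, and pipes the result back to `kubectl replace -f -`. From a shell the same round-trip can be approximated as below; the sed edit illustrates the idea, it is not the test's exact mechanism (the test builds the replacement object in Go).

# Approximate shell equivalent of the image swap performed above.
kubectl get pod e2e-test-nginx-pod -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
# spec.containers[*].image is one of the few mutable pod fields, so the replace
# is accepted; a bare busybox image exits immediately unless a command is set.
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.containers[0].image}{"\n"}'
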
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:45:17.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 21 11:45:17.389: INFO: Waiting up to 5m0s for pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-wmwfv" to be "success or failure"
Feb 21 11:45:17.406: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.97125ms
Feb 21 11:45:19.549: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159320554s
Feb 21 11:45:21.561: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17134834s
Feb 21 11:45:23.604: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21427716s
Feb 21 11:45:25.626: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.236895143s
Feb 21 11:45:27.637: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.247743053s
STEP: Saw pod success
Feb 21 11:45:27.637: INFO: Pod "pod-a1651e06-549f-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:45:27.641: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-a1651e06-549f-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 11:45:28.242: INFO: Waiting for pod pod-a1651e06-549f-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:45:28.681: INFO: Pod pod-a1651e06-549f-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:45:28.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wmwfv" for this suite.
Feb 21 11:45:34.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:45:34.842: INFO: namespace: e2e-tests-emptydir-wmwfv, resource: bindings, ignored listing per whitelist
Feb 21 11:45:34.939: INFO: namespace e2e-tests-emptydir-wmwfv deletion completed in 6.23265099s

• [SLOW TEST:17.900 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
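
The emptyDir spec above writes through a tmpfs-backed volume with 0777 permissions. An illustrative pod (image and paths assumed; the test's mounttest image performs the permission checks itself):

# Illustrative: tmpfs-backed emptyDir; create a file and report its mode.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory        # tmpfs; omit 'medium' for node-disk backing
EOF
kubectl logs pod-emptydir-tmpfs
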
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:45:34.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-84dtb
Feb 21 11:45:45.366: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-84dtb
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 11:45:45.371: INFO: Initial restart count of pod liveness-http is 0
Feb 21 11:46:07.607: INFO: Restart count of pod e2e-tests-container-probe-84dtb/liveness-http is now 1 (22.235744602s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:46:07.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-84dtb" for this suite.
Feb 21 11:46:13.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:46:14.098: INFO: namespace: e2e-tests-container-probe-84dtb, resource: bindings, ignored listing per whitelist
Feb 21 11:46:14.385: INFO: namespace e2e-tests-container-probe-84dtb deletion completed in 6.668165053s

• [SLOW TEST:39.446 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
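
The liveness spec above relies on an HTTP probe against /healthz that starts failing, so the kubelet restarts the container and restartCount goes from 0 to 1. A generic pod with such a probe is sketched below; the image and timings are assumptions (the e2e test uses a purpose-built image whose /healthz endpoint begins failing after a while, whereas plain nginx simply has no /healthz, so the probe fails and the restart mechanism is still visible).

# Illustrative HTTP liveness probe: when GET /healthz does not return 2xx for
# failureThreshold consecutive checks, the kubelet restarts the container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 3
EOF
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
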
SSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:46:14.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 11:46:24.969: INFO: Waiting up to 5m0s for pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008" in namespace "e2e-tests-pods-5jjps" to be "success or failure"
Feb 21 11:46:25.004: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.414567ms
Feb 21 11:46:27.020: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049855455s
Feb 21 11:46:29.780: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.810285734s
Feb 21 11:46:31.801: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.831423379s
Feb 21 11:46:33.838: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.867643274s
STEP: Saw pod success
Feb 21 11:46:33.838: INFO: Pod "client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:46:33.853: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008 container env3cont: 
STEP: delete the pod
Feb 21 11:46:35.251: INFO: Waiting for pod client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:46:35.299: INFO: Pod client-envvars-c9b3f078-549f-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:46:35.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5jjps" for this suite.
Feb 21 11:47:19.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:47:19.579: INFO: namespace: e2e-tests-pods-5jjps, resource: bindings, ignored listing per whitelist
Feb 21 11:47:19.618: INFO: namespace e2e-tests-pods-5jjps deletion completed in 44.246448308s

• [SLOW TEST:65.233 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
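
The pods spec above checks that a pod created after a Service exists sees that Service's injected environment variables (only Services that exist at pod creation time are injected). For a Service named fooservice, the kubelet sets variables such as FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT; a quick way to see this, with illustrative names:

# Illustrative: create a Service first, then a pod, and print the injected
# *_SERVICE_HOST / *_SERVICE_PORT variables.
kubectl create service clusterip fooservice --tcp=8765:8080
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.29
    command: ["sh", "-c", "env | grep FOOSERVICE"]
EOF
kubectl logs client-envvars   # expect FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT, ...
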
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:47:19.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ea6b6ff3-549f-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ea6b6ff3-549f-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:48:49.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6qnkx" for this suite.
Feb 21 11:49:13.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:49:13.491: INFO: namespace: e2e-tests-projected-6qnkx, resource: bindings, ignored listing per whitelist
Feb 21 11:49:13.544: INFO: namespace e2e-tests-projected-6qnkx deletion completed in 24.221740843s

• [SLOW TEST:113.926 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
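
The projected-volume spec above edits a ConfigMap and waits for the change to appear in the mounted file; the kubelet refreshes projected and configMap volumes on its sync period rather than instantly, which is why the test "waits to observe" the update. A sketch of such a projection (names assumed):

# Illustrative: projected volume sourcing a ConfigMap; editing the ConfigMap
# eventually changes the file content inside the running pod.
kubectl create configmap projected-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-update
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-cm
EOF
kubectl patch configmap projected-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f pod-projected-update   # the printed value flips to value-2 after the kubelet re-syncs
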
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:49:13.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-2e8592fc-54a0-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:49:28.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7cxbh" for this suite.
Feb 21 11:49:52.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:49:52.364: INFO: namespace: e2e-tests-configmap-7cxbh, resource: bindings, ignored listing per whitelist
Feb 21 11:49:52.547: INFO: namespace e2e-tests-configmap-7cxbh deletion completed in 24.258919061s

• [SLOW TEST:39.002 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
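
The binary-data spec above stores both a text key (under data) and a binary key (under binaryData, base64-encoded in the object) in a single ConfigMap and reads them back through a volume. Created from the command line with a reasonably recent kubectl, a non-UTF-8 file lands under binaryData automatically; file and key names below are assumptions.

# Illustrative: a ConfigMap mixing text and binary content.
printf 'hello\n' > text.txt
head -c 16 /dev/urandom > blob.bin
kubectl create configmap configmap-binary --from-file=text=text.txt --from-file=blob=blob.bin
kubectl get configmap configmap-binary -o yaml   # 'text' appears under data:, 'blob' under binaryData:
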
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:49:52.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-l4n8w
Feb 21 11:50:02.961: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-l4n8w
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 11:50:02.981: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:54:04.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-l4n8w" for this suite.
Feb 21 11:54:10.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:54:10.774: INFO: namespace: e2e-tests-container-probe-l4n8w, resource: bindings, ignored listing per whitelist
Feb 21 11:54:10.875: INFO: namespace e2e-tests-container-probe-l4n8w deletion completed in 6.266361736s

• [SLOW TEST:258.328 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
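
This is the negative counterpart of the earlier liveness spec: the probed endpoint keeps succeeding, so the restart count must stay at 0 for the whole observation window (hence the long runtime). The same check can be run by hand against any pod:

# Poll a pod's first-container restart count; with a healthy liveness endpoint
# it should stay 0 (add -n <namespace> if the pod is not in the current one).
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
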
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:54:10.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:54:11.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-86wbm" to be "success or failure"
Feb 21 11:54:11.250: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.212084ms
Feb 21 11:54:13.383: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144355045s
Feb 21 11:54:15.410: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171423053s
Feb 21 11:54:20.241: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.002600162s
Feb 21 11:54:22.276: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.037797959s
Feb 21 11:54:24.293: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.05457283s
STEP: Saw pod success
Feb 21 11:54:24.293: INFO: Pod "downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:54:24.297: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:54:26.030: INFO: Waiting for pod downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:54:26.038: INFO: Pod downwardapi-volume-dfaa8c19-54a0-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:54:26.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-86wbm" for this suite.
Feb 21 11:54:32.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:54:32.130: INFO: namespace: e2e-tests-downward-api-86wbm, resource: bindings, ignored listing per whitelist
Feb 21 11:54:32.235: INFO: namespace e2e-tests-downward-api-86wbm deletion completed in 6.1880166s

• [SLOW TEST:21.360 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
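
The downward-API spec above exposes the container's CPU limit through a volume file via resourceFieldRef. A minimal hand-written version, with assumed limits, paths, and image:

# Illustrative: downward API volume exposing the container's cpu limit.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m        # report the value in millicores
EOF
kubectl logs downwardapi-volume-cpu   # prints 500 with the 1m divisor
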
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:54:32.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:54:32.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-9zp8f" to be "success or failure"
Feb 21 11:54:32.436: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.745057ms
Feb 21 11:54:36.667: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247447498s
Feb 21 11:54:38.683: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263985625s
Feb 21 11:54:40.751: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331388049s
Feb 21 11:54:42.854: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.434282504s
Feb 21 11:54:44.960: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.540604593s
Feb 21 11:54:46.973: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.553988154s
Feb 21 11:54:49.320: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.900436497s
STEP: Saw pod success
Feb 21 11:54:49.320: INFO: Pod "downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:54:49.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:54:49.733: INFO: Waiting for pod downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:54:49.801: INFO: Pod downwardapi-volume-ec4bcc54-54a0-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:54:49.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9zp8f" for this suite.
Feb 21 11:54:55.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:54:56.110: INFO: namespace: e2e-tests-downward-api-9zp8f, resource: bindings, ignored listing per whitelist
Feb 21 11:54:56.122: INFO: namespace e2e-tests-downward-api-9zp8f deletion completed in 6.299236765s

• [SLOW TEST:23.886 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
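
The "podname only" variant uses fieldRef rather than resourceFieldRef: the pod's metadata.name is written into a file in the downward API volume. Sketch with assumed names:

# Illustrative: downward API volume exposing only the pod's name.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
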
SSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:54:56.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-faac81b4-54a0-11ea-b1f8-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-faac832e-54a0-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-faac81b4-54a0-11ea-b1f8-0242ac110008
STEP: Updating configmap cm-test-opt-upd-faac832e-54a0-11ea-b1f8-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-faac8400-54a0-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:55:15.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-849m9" for this suite.
Feb 21 11:55:49.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:55:49.255: INFO: namespace: e2e-tests-projected-849m9, resource: bindings, ignored listing per whitelist
Feb 21 11:55:49.404: INFO: namespace e2e-tests-projected-849m9 deletion completed in 34.212395936s

• [SLOW TEST:53.282 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
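
The "optional updates" spec deletes one referenced ConfigMap, updates another, and creates a third (see the STEP lines above), and the pod keeps running while the volume contents converge. The piece that makes this work is optional: true on the projection sources; names below are assumptions.

# Illustrative: projected volume with optional ConfigMap sources; missing maps
# do not block pod startup, and maps created later appear after a re-sync.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-optional
spec:
  containers:
  - name: projected-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del
          optional: true
      - configMap:
          name: cm-test-opt-create
          optional: true
EOF
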
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:55:49.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-1a4ea8e9-54a1-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 11:55:49.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-vc9jm" to be "success or failure"
Feb 21 11:55:49.672: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.018688ms
Feb 21 11:55:52.586: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.950038745s
Feb 21 11:55:54.611: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.975293632s
Feb 21 11:55:57.377: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.741450821s
Feb 21 11:55:59.391: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.754915281s
Feb 21 11:56:01.401: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.76529431s
STEP: Saw pod success
Feb 21 11:56:01.401: INFO: Pod "pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:56:01.405: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 11:56:02.539: INFO: Waiting for pod pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:56:02.839: INFO: Pod pod-projected-configmaps-1a508247-54a1-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:56:02.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vc9jm" for this suite.
Feb 21 11:56:09.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:56:09.161: INFO: namespace: e2e-tests-projected-vc9jm, resource: bindings, ignored listing per whitelist
Feb 21 11:56:09.247: INFO: namespace e2e-tests-projected-vc9jm deletion completed in 6.24002773s

• [SLOW TEST:19.843 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
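
The "mappings and Item mode set" spec renames a ConfigMap key to a different file path via items: and sets a per-file mode. Sketch with assumed key names and mode:

# Illustrative: map key 'data-1' to file 'path/to/data-2' with mode 0400.
kubectl create configmap projected-configmap-map --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-map
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected/path/to/data-2 && cat /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-map
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400
EOF
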
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:56:09.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 21 11:56:09.417: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zv2pb,SelfLink:/api/v1/namespaces/e2e-tests-watch-zv2pb/configmaps/e2e-watch-test-watch-closed,UID:2618e0ea-54a1-11ea-a994-fa163e34d433,ResourceVersion:22419795,Generation:0,CreationTimestamp:2020-02-21 11:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 11:56:09.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zv2pb,SelfLink:/api/v1/namespaces/e2e-tests-watch-zv2pb/configmaps/e2e-watch-test-watch-closed,UID:2618e0ea-54a1-11ea-a994-fa163e34d433,ResourceVersion:22419796,Generation:0,CreationTimestamp:2020-02-21 11:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 21 11:56:09.465: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zv2pb,SelfLink:/api/v1/namespaces/e2e-tests-watch-zv2pb/configmaps/e2e-watch-test-watch-closed,UID:2618e0ea-54a1-11ea-a994-fa163e34d433,ResourceVersion:22419797,Generation:0,CreationTimestamp:2020-02-21 11:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 11:56:09.466: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-zv2pb,SelfLink:/api/v1/namespaces/e2e-tests-watch-zv2pb/configmaps/e2e-watch-test-watch-closed,UID:2618e0ea-54a1-11ea-a994-fa163e34d433,ResourceVersion:22419798,Generation:0,CreationTimestamp:2020-02-21 11:56:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:56:09.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zv2pb" for this suite.
Feb 21 11:56:15.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:56:15.625: INFO: namespace: e2e-tests-watch-zv2pb, resource: bindings, ignored listing per whitelist
Feb 21 11:56:15.680: INFO: namespace e2e-tests-watch-zv2pb deletion completed in 6.201180999s

• [SLOW TEST:6.433 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
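The watch test above boils down to one API behavior: a client may re-open a watch at the resourceVersion it last observed, and the server replays the MODIFIED and DELETED events that occurred while no watch was open. A minimal way to reproduce this by hand, assuming a placeholder namespace and configmap name rather than the ones generated by this run, is:

# Record the last resourceVersion seen, mutate the configmap with no watch open,
# then start a fresh watch from that version via the raw API; the missed events stream back.
RV=$(kubectl get configmap <configmap-name> -n <namespace> -o jsonpath='{.metadata.resourceVersion}')
kubectl get --raw "/api/v1/namespaces/<namespace>/configmaps?watch=true&resourceVersion=${RV}"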
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:56:15.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:57:15.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bkw4x" for this suite.
Feb 21 11:57:40.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:57:40.035: INFO: namespace: e2e-tests-container-probe-bkw4x, resource: bindings, ignored listing per whitelist
Feb 21 11:57:40.168: INFO: namespace e2e-tests-container-probe-bkw4x deletion completed in 24.201091968s

• [SLOW TEST:84.488 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
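The probe test above asserts only on status: a container whose readiness probe always fails must stay Running, never report Ready, and never be restarted (a failing readiness probe, unlike a liveness probe, does not kill the container). A minimal sketch of such a pod, with an image and names that are assumptions and not taken from this run:

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Expect "false 0": not ready, zero restarts.
kubectl get pod never-ready -n <namespace> \
  -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'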
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:57:40.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 11:57:40.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-mhhgj" to be "success or failure"
Feb 21 11:57:40.470: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 70.701946ms
Feb 21 11:57:42.492: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092275188s
Feb 21 11:57:44.515: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115693879s
Feb 21 11:57:46.639: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239713341s
Feb 21 11:57:48.738: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.33864478s
Feb 21 11:57:50.759: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.359727313s
Feb 21 11:57:52.775: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.374894173s
STEP: Saw pod success
Feb 21 11:57:52.775: INFO: Pod "downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 11:57:52.780: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 11:57:52.878: INFO: Waiting for pod downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008 to disappear
Feb 21 11:57:52.895: INFO: Pod downwardapi-volume-5c55d2fb-54a1-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:57:52.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mhhgj" for this suite.
Feb 21 11:57:59.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:57:59.088: INFO: namespace: e2e-tests-downward-api-mhhgj, resource: bindings, ignored listing per whitelist
Feb 21 11:57:59.178: INFO: namespace e2e-tests-downward-api-mhhgj deletion completed in 6.269365044s

• [SLOW TEST:19.010 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
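The downward API test above mounts a volume that exposes the container's own memory request as a file and then checks the pod's log for the expected value. A rough hand-rolled equivalent, with all names and sizes chosen here as placeholders, is:

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
# With the default divisor the file holds the request in bytes: 32Mi -> 33554432.
kubectl logs downwardapi-memory-demo -n <namespace>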
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:57:59.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 21 11:58:10.358: INFO: Successfully updated pod "annotationupdate67bba22e-54a1-11ea-b1f8-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:58:12.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ptfqr" for this suite.
Feb 21 11:58:37.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:58:37.205: INFO: namespace: e2e-tests-projected-ptfqr, resource: bindings, ignored listing per whitelist
Feb 21 11:58:37.312: INFO: namespace e2e-tests-projected-ptfqr deletion completed in 24.506908501s

• [SLOW TEST:38.134 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
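The projected downwardAPI test above relies on the kubelet refreshing projected files in place: pod annotations are exposed as a file, the test updates the annotations, and the file's contents change without restarting the container. A sketch with placeholder names:

kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo -n <namespace> build="two" --overwrite
# Shortly afterwards the projected file (and therefore the log output) shows build="two".
kubectl logs annotation-demo -n <namespace> --tail=5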
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:58:37.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 11:58:37.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:58:39.686: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 11:58:39.687: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 21 11:58:39.702: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 21 11:58:39.758: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 21 11:58:39.811: INFO: scanned /root for discovery docs: 
Feb 21 11:58:39.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:59:05.253: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 21 11:59:05.254: INFO: stdout: "Created e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f\nScaling up e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
Feb 21 11:59:05.254: INFO: stdout: "Created e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f\nScaling up e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 21 11:59:05.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:59:05.441: INFO: stderr: ""
Feb 21 11:59:05.442: INFO: stdout: "e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f-gzr7c "
Feb 21 11:59:05.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f-gzr7c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:59:05.557: INFO: stderr: ""
Feb 21 11:59:05.558: INFO: stdout: "true"
Feb 21 11:59:05.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f-gzr7c -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:59:05.687: INFO: stderr: ""
Feb 21 11:59:05.687: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 21 11:59:05.687: INFO: e2e-test-nginx-rc-d61a5d8e80d32530fd0ac2209ee7165f-gzr7c is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 21 11:59:05.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r4rcm'
Feb 21 11:59:05.836: INFO: stderr: ""
Feb 21 11:59:05.836: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 11:59:05.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r4rcm" for this suite.
Feb 21 11:59:27.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 11:59:28.012: INFO: namespace: e2e-tests-kubectl-r4rcm, resource: bindings, ignored listing per whitelist
Feb 21 11:59:28.035: INFO: namespace e2e-tests-kubectl-r4rcm deletion completed in 22.165917183s

• [SLOW TEST:50.722 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
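Stripped of the e2e plumbing, the rolling-update test above issues ordinary kubectl commands; both the run/v1 generator and rolling-update itself are deprecated in this kubectl version (the stderr lines above say so) and were removed in later releases. The same flow by hand, with a placeholder namespace:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 -n <namespace>
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent -n <namespace>
kubectl delete rc e2e-test-nginx-rc -n <namespace>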
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 11:59:28.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-9j5bx
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9j5bx
STEP: Waiting until all replicas of stateful set ss are running in namespace e2e-tests-statefulset-9j5bx
Feb 21 11:59:28.217: INFO: Found 0 stateful pods, waiting for 1
Feb 21 11:59:38.322: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 11:59:48.235: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 21 11:59:48.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 11:59:49.404: INFO: stderr: "I0221 11:59:48.524980    2219 log.go:172] (0xc0006d82c0) (0xc0006812c0) Create stream\nI0221 11:59:48.525220    2219 log.go:172] (0xc0006d82c0) (0xc0006812c0) Stream added, broadcasting: 1\nI0221 11:59:48.541641    2219 log.go:172] (0xc0006d82c0) Reply frame received for 1\nI0221 11:59:48.541726    2219 log.go:172] (0xc0006d82c0) (0xc00071e000) Create stream\nI0221 11:59:48.541812    2219 log.go:172] (0xc0006d82c0) (0xc00071e000) Stream added, broadcasting: 3\nI0221 11:59:48.544474    2219 log.go:172] (0xc0006d82c0) Reply frame received for 3\nI0221 11:59:48.544540    2219 log.go:172] (0xc0006d82c0) (0xc00040e000) Create stream\nI0221 11:59:48.544562    2219 log.go:172] (0xc0006d82c0) (0xc00040e000) Stream added, broadcasting: 5\nI0221 11:59:48.550628    2219 log.go:172] (0xc0006d82c0) Reply frame received for 5\nI0221 11:59:49.219845    2219 log.go:172] (0xc0006d82c0) Data frame received for 3\nI0221 11:59:49.219947    2219 log.go:172] (0xc00071e000) (3) Data frame handling\nI0221 11:59:49.219986    2219 log.go:172] (0xc00071e000) (3) Data frame sent\nI0221 11:59:49.390456    2219 log.go:172] (0xc0006d82c0) (0xc00071e000) Stream removed, broadcasting: 3\nI0221 11:59:49.390578    2219 log.go:172] (0xc0006d82c0) Data frame received for 1\nI0221 11:59:49.390616    2219 log.go:172] (0xc0006812c0) (1) Data frame handling\nI0221 11:59:49.390642    2219 log.go:172] (0xc0006812c0) (1) Data frame sent\nI0221 11:59:49.390781    2219 log.go:172] (0xc0006d82c0) (0xc0006812c0) Stream removed, broadcasting: 1\nI0221 11:59:49.390905    2219 log.go:172] (0xc0006d82c0) (0xc00040e000) Stream removed, broadcasting: 5\nI0221 11:59:49.390954    2219 log.go:172] (0xc0006d82c0) Go away received\nI0221 11:59:49.391119    2219 log.go:172] (0xc0006d82c0) (0xc0006812c0) Stream removed, broadcasting: 1\nI0221 11:59:49.391524    2219 log.go:172] (0xc0006d82c0) (0xc00071e000) Stream removed, broadcasting: 3\nI0221 11:59:49.391557    2219 log.go:172] (0xc0006d82c0) (0xc00040e000) Stream removed, broadcasting: 5\n"
Feb 21 11:59:49.405: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 11:59:49.405: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

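The exec above is how the test toggles readiness on a stateful pod: moving nginx's index.html out of the web root makes the pod's readiness check fail (the Ready condition flips to false a few lines below), and the reverse mv later restores it. In generic form, with pod and namespace standing in for whatever the test created:

kubectl exec -n <namespace> <pod> -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'   # readiness starts failing
kubectl exec -n <namespace> <pod> -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'   # readiness recovers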
Feb 21 11:59:49.438: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 11:59:49.438: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 11:59:49.576: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 11:59:49.576: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 11:59:49.576: INFO: ss-1                              Pending         []
Feb 21 11:59:49.576: INFO: 
Feb 21 11:59:49.576: INFO: StatefulSet ss has not reached scale 3, at 2
Feb 21 11:59:50.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.93898008s
Feb 21 11:59:51.929: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.890622985s
Feb 21 11:59:52.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.585318887s
Feb 21 11:59:53.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.572929062s
Feb 21 11:59:55.020: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.55144396s
Feb 21 11:59:57.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.494872841s
Feb 21 11:59:58.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.093110348s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-9j5bx
Feb 21 11:59:59.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:00.774: INFO: stderr: "I0221 12:00:00.072527    2241 log.go:172] (0xc00072c370) (0xc0007b8640) Create stream\nI0221 12:00:00.072684    2241 log.go:172] (0xc00072c370) (0xc0007b8640) Stream added, broadcasting: 1\nI0221 12:00:00.079366    2241 log.go:172] (0xc00072c370) Reply frame received for 1\nI0221 12:00:00.079405    2241 log.go:172] (0xc00072c370) (0xc000658c80) Create stream\nI0221 12:00:00.079435    2241 log.go:172] (0xc00072c370) (0xc000658c80) Stream added, broadcasting: 3\nI0221 12:00:00.080391    2241 log.go:172] (0xc00072c370) Reply frame received for 3\nI0221 12:00:00.080446    2241 log.go:172] (0xc00072c370) (0xc000686000) Create stream\nI0221 12:00:00.080471    2241 log.go:172] (0xc00072c370) (0xc000686000) Stream added, broadcasting: 5\nI0221 12:00:00.081249    2241 log.go:172] (0xc00072c370) Reply frame received for 5\nI0221 12:00:00.358223    2241 log.go:172] (0xc00072c370) Data frame received for 3\nI0221 12:00:00.358312    2241 log.go:172] (0xc000658c80) (3) Data frame handling\nI0221 12:00:00.358341    2241 log.go:172] (0xc000658c80) (3) Data frame sent\nI0221 12:00:00.765057    2241 log.go:172] (0xc00072c370) (0xc000658c80) Stream removed, broadcasting: 3\nI0221 12:00:00.765150    2241 log.go:172] (0xc00072c370) Data frame received for 1\nI0221 12:00:00.765175    2241 log.go:172] (0xc0007b8640) (1) Data frame handling\nI0221 12:00:00.765194    2241 log.go:172] (0xc0007b8640) (1) Data frame sent\nI0221 12:00:00.765210    2241 log.go:172] (0xc00072c370) (0xc0007b8640) Stream removed, broadcasting: 1\nI0221 12:00:00.765370    2241 log.go:172] (0xc00072c370) (0xc000686000) Stream removed, broadcasting: 5\nI0221 12:00:00.765522    2241 log.go:172] (0xc00072c370) Go away received\nI0221 12:00:00.765620    2241 log.go:172] (0xc00072c370) (0xc0007b8640) Stream removed, broadcasting: 1\nI0221 12:00:00.765641    2241 log.go:172] (0xc00072c370) (0xc000658c80) Stream removed, broadcasting: 3\nI0221 12:00:00.765652    2241 log.go:172] (0xc00072c370) (0xc000686000) Stream removed, broadcasting: 5\n"
Feb 21 12:00:00.775: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 12:00:00.775: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 12:00:00.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:01.214: INFO: rc: 1
Feb 21 12:00:01.214: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000d42f30 exit status 1   true [0xc00141a678 0xc00141a690 0xc00141a6a8] [0xc00141a678 0xc00141a690 0xc00141a6a8] [0xc00141a688 0xc00141a6a0] [0x935700 0x935700] 0xc00190a840 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

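The rc: 1 above is not a test failure by itself: `unable to upgrade connection: container not found ("nginx")` means the exec was attempted while ss-1's container was not yet running (it had only just been created during the burst scale-up), and the framework simply retries 10s later, as the next attempt shows. Investigating the same situation interactively might look like:

kubectl get pod ss-1 -n e2e-tests-statefulset-9j5bx -o jsonpath='{.status.containerStatuses[0].state}'
kubectl describe pod ss-1 -n e2e-tests-statefulset-9j5bx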
Feb 21 12:00:11.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:11.740: INFO: stderr: "I0221 12:00:11.456567    2285 log.go:172] (0xc0003762c0) (0xc0005d7360) Create stream\nI0221 12:00:11.456724    2285 log.go:172] (0xc0003762c0) (0xc0005d7360) Stream added, broadcasting: 1\nI0221 12:00:11.462857    2285 log.go:172] (0xc0003762c0) Reply frame received for 1\nI0221 12:00:11.462911    2285 log.go:172] (0xc0003762c0) (0xc000732000) Create stream\nI0221 12:00:11.462920    2285 log.go:172] (0xc0003762c0) (0xc000732000) Stream added, broadcasting: 3\nI0221 12:00:11.464399    2285 log.go:172] (0xc0003762c0) Reply frame received for 3\nI0221 12:00:11.464437    2285 log.go:172] (0xc0003762c0) (0xc0005d7400) Create stream\nI0221 12:00:11.464449    2285 log.go:172] (0xc0003762c0) (0xc0005d7400) Stream added, broadcasting: 5\nI0221 12:00:11.467231    2285 log.go:172] (0xc0003762c0) Reply frame received for 5\nI0221 12:00:11.576855    2285 log.go:172] (0xc0003762c0) Data frame received for 3\nI0221 12:00:11.576951    2285 log.go:172] (0xc000732000) (3) Data frame handling\nI0221 12:00:11.576968    2285 log.go:172] (0xc000732000) (3) Data frame sent\nI0221 12:00:11.577060    2285 log.go:172] (0xc0003762c0) Data frame received for 5\nI0221 12:00:11.577130    2285 log.go:172] (0xc0005d7400) (5) Data frame handling\nI0221 12:00:11.577158    2285 log.go:172] (0xc0005d7400) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0221 12:00:11.726305    2285 log.go:172] (0xc0003762c0) (0xc000732000) Stream removed, broadcasting: 3\nI0221 12:00:11.726466    2285 log.go:172] (0xc0003762c0) Data frame received for 1\nI0221 12:00:11.726522    2285 log.go:172] (0xc0005d7360) (1) Data frame handling\nI0221 12:00:11.726572    2285 log.go:172] (0xc0005d7360) (1) Data frame sent\nI0221 12:00:11.726594    2285 log.go:172] (0xc0003762c0) (0xc0005d7360) Stream removed, broadcasting: 1\nI0221 12:00:11.726816    2285 log.go:172] (0xc0003762c0) (0xc0005d7400) Stream removed, broadcasting: 5\nI0221 12:00:11.727180    2285 log.go:172] (0xc0003762c0) (0xc0005d7360) Stream removed, broadcasting: 1\nI0221 12:00:11.727198    2285 log.go:172] (0xc0003762c0) (0xc000732000) Stream removed, broadcasting: 3\nI0221 12:00:11.727206    2285 log.go:172] (0xc0003762c0) (0xc0005d7400) Stream removed, broadcasting: 5\n"
Feb 21 12:00:11.741: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 12:00:11.741: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 12:00:11.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:12.332: INFO: stderr: "I0221 12:00:11.964248    2307 log.go:172] (0xc000704370) (0xc000728640) Create stream\nI0221 12:00:11.964500    2307 log.go:172] (0xc000704370) (0xc000728640) Stream added, broadcasting: 1\nI0221 12:00:11.971364    2307 log.go:172] (0xc000704370) Reply frame received for 1\nI0221 12:00:11.971410    2307 log.go:172] (0xc000704370) (0xc000674be0) Create stream\nI0221 12:00:11.971427    2307 log.go:172] (0xc000704370) (0xc000674be0) Stream added, broadcasting: 3\nI0221 12:00:11.972604    2307 log.go:172] (0xc000704370) Reply frame received for 3\nI0221 12:00:11.972622    2307 log.go:172] (0xc000704370) (0xc0007286e0) Create stream\nI0221 12:00:11.972630    2307 log.go:172] (0xc000704370) (0xc0007286e0) Stream added, broadcasting: 5\nI0221 12:00:11.973549    2307 log.go:172] (0xc000704370) Reply frame received for 5\nI0221 12:00:12.146027    2307 log.go:172] (0xc000704370) Data frame received for 3\nI0221 12:00:12.146190    2307 log.go:172] (0xc000674be0) (3) Data frame handling\nI0221 12:00:12.146217    2307 log.go:172] (0xc000674be0) (3) Data frame sent\nI0221 12:00:12.146263    2307 log.go:172] (0xc000704370) Data frame received for 5\nI0221 12:00:12.146293    2307 log.go:172] (0xc0007286e0) (5) Data frame handling\nI0221 12:00:12.146348    2307 log.go:172] (0xc0007286e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0221 12:00:12.321627    2307 log.go:172] (0xc000704370) Data frame received for 1\nI0221 12:00:12.321749    2307 log.go:172] (0xc000728640) (1) Data frame handling\nI0221 12:00:12.321778    2307 log.go:172] (0xc000728640) (1) Data frame sent\nI0221 12:00:12.321814    2307 log.go:172] (0xc000704370) (0xc000674be0) Stream removed, broadcasting: 3\nI0221 12:00:12.321875    2307 log.go:172] (0xc000704370) (0xc000728640) Stream removed, broadcasting: 1\nI0221 12:00:12.322457    2307 log.go:172] (0xc000704370) (0xc0007286e0) Stream removed, broadcasting: 5\nI0221 12:00:12.322599    2307 log.go:172] (0xc000704370) Go away received\nI0221 12:00:12.322762    2307 log.go:172] (0xc000704370) (0xc000728640) Stream removed, broadcasting: 1\nI0221 12:00:12.322863    2307 log.go:172] (0xc000704370) (0xc000674be0) Stream removed, broadcasting: 3\nI0221 12:00:12.322958    2307 log.go:172] (0xc000704370) (0xc0007286e0) Stream removed, broadcasting: 5\n"
Feb 21 12:00:12.333: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 12:00:12.333: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 12:00:12.355: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:00:12.355: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:00:12.355: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 21 12:00:12.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 12:00:12.998: INFO: stderr: "I0221 12:00:12.519306    2329 log.go:172] (0xc000720370) (0xc0007b2640) Create stream\nI0221 12:00:12.519799    2329 log.go:172] (0xc000720370) (0xc0007b2640) Stream added, broadcasting: 1\nI0221 12:00:12.529473    2329 log.go:172] (0xc000720370) Reply frame received for 1\nI0221 12:00:12.529529    2329 log.go:172] (0xc000720370) (0xc0005bcc80) Create stream\nI0221 12:00:12.529545    2329 log.go:172] (0xc000720370) (0xc0005bcc80) Stream added, broadcasting: 3\nI0221 12:00:12.531812    2329 log.go:172] (0xc000720370) Reply frame received for 3\nI0221 12:00:12.531831    2329 log.go:172] (0xc000720370) (0xc000708000) Create stream\nI0221 12:00:12.531836    2329 log.go:172] (0xc000720370) (0xc000708000) Stream added, broadcasting: 5\nI0221 12:00:12.533230    2329 log.go:172] (0xc000720370) Reply frame received for 5\nI0221 12:00:12.797251    2329 log.go:172] (0xc000720370) Data frame received for 3\nI0221 12:00:12.797397    2329 log.go:172] (0xc0005bcc80) (3) Data frame handling\nI0221 12:00:12.797427    2329 log.go:172] (0xc0005bcc80) (3) Data frame sent\nI0221 12:00:12.988176    2329 log.go:172] (0xc000720370) Data frame received for 1\nI0221 12:00:12.988251    2329 log.go:172] (0xc0007b2640) (1) Data frame handling\nI0221 12:00:12.988280    2329 log.go:172] (0xc0007b2640) (1) Data frame sent\nI0221 12:00:12.988429    2329 log.go:172] (0xc000720370) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0221 12:00:12.988517    2329 log.go:172] (0xc000720370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0221 12:00:12.988728    2329 log.go:172] (0xc000720370) (0xc000708000) Stream removed, broadcasting: 5\nI0221 12:00:12.988767    2329 log.go:172] (0xc000720370) (0xc0007b2640) Stream removed, broadcasting: 1\nI0221 12:00:12.988783    2329 log.go:172] (0xc000720370) (0xc0005bcc80) Stream removed, broadcasting: 3\nI0221 12:00:12.988789    2329 log.go:172] (0xc000720370) (0xc000708000) Stream removed, broadcasting: 5\nI0221 12:00:12.988835    2329 log.go:172] (0xc000720370) Go away received\n"
Feb 21 12:00:12.999: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 12:00:12.999: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 12:00:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 12:00:13.585: INFO: stderr: "I0221 12:00:13.209515    2351 log.go:172] (0xc00013a6e0) (0xc00071c640) Create stream\nI0221 12:00:13.209712    2351 log.go:172] (0xc00013a6e0) (0xc00071c640) Stream added, broadcasting: 1\nI0221 12:00:13.214472    2351 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0221 12:00:13.214502    2351 log.go:172] (0xc00013a6e0) (0xc0005b8b40) Create stream\nI0221 12:00:13.214511    2351 log.go:172] (0xc00013a6e0) (0xc0005b8b40) Stream added, broadcasting: 3\nI0221 12:00:13.215265    2351 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0221 12:00:13.215288    2351 log.go:172] (0xc00013a6e0) (0xc000122000) Create stream\nI0221 12:00:13.215296    2351 log.go:172] (0xc00013a6e0) (0xc000122000) Stream added, broadcasting: 5\nI0221 12:00:13.216061    2351 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0221 12:00:13.452854    2351 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0221 12:00:13.452999    2351 log.go:172] (0xc0005b8b40) (3) Data frame handling\nI0221 12:00:13.453027    2351 log.go:172] (0xc0005b8b40) (3) Data frame sent\nI0221 12:00:13.572311    2351 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0221 12:00:13.572410    2351 log.go:172] (0xc00071c640) (1) Data frame handling\nI0221 12:00:13.572429    2351 log.go:172] (0xc00071c640) (1) Data frame sent\nI0221 12:00:13.572778    2351 log.go:172] (0xc00013a6e0) (0xc00071c640) Stream removed, broadcasting: 1\nI0221 12:00:13.573447    2351 log.go:172] (0xc00013a6e0) (0xc000122000) Stream removed, broadcasting: 5\nI0221 12:00:13.573610    2351 log.go:172] (0xc00013a6e0) (0xc0005b8b40) Stream removed, broadcasting: 3\nI0221 12:00:13.573750    2351 log.go:172] (0xc00013a6e0) (0xc00071c640) Stream removed, broadcasting: 1\nI0221 12:00:13.573919    2351 log.go:172] (0xc00013a6e0) (0xc0005b8b40) Stream removed, broadcasting: 3\nI0221 12:00:13.574012    2351 log.go:172] (0xc00013a6e0) (0xc000122000) Stream removed, broadcasting: 5\nI0221 12:00:13.574090    2351 log.go:172] (0xc00013a6e0) Go away received\n"
Feb 21 12:00:13.585: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 12:00:13.585: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 12:00:13.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 12:00:14.126: INFO: stderr: "I0221 12:00:13.736905    2373 log.go:172] (0xc0006f8370) (0xc0002975e0) Create stream\nI0221 12:00:13.737104    2373 log.go:172] (0xc0006f8370) (0xc0002975e0) Stream added, broadcasting: 1\nI0221 12:00:13.745690    2373 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0221 12:00:13.745748    2373 log.go:172] (0xc0006f8370) (0xc0005be000) Create stream\nI0221 12:00:13.745761    2373 log.go:172] (0xc0006f8370) (0xc0005be000) Stream added, broadcasting: 3\nI0221 12:00:13.748052    2373 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0221 12:00:13.748089    2373 log.go:172] (0xc0006f8370) (0xc000590000) Create stream\nI0221 12:00:13.748100    2373 log.go:172] (0xc0006f8370) (0xc000590000) Stream added, broadcasting: 5\nI0221 12:00:13.750358    2373 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0221 12:00:13.989506    2373 log.go:172] (0xc0006f8370) Data frame received for 3\nI0221 12:00:13.989571    2373 log.go:172] (0xc0005be000) (3) Data frame handling\nI0221 12:00:13.989589    2373 log.go:172] (0xc0005be000) (3) Data frame sent\nI0221 12:00:14.119643    2373 log.go:172] (0xc0006f8370) (0xc0005be000) Stream removed, broadcasting: 3\nI0221 12:00:14.119793    2373 log.go:172] (0xc0006f8370) Data frame received for 1\nI0221 12:00:14.119831    2373 log.go:172] (0xc0002975e0) (1) Data frame handling\nI0221 12:00:14.119841    2373 log.go:172] (0xc0002975e0) (1) Data frame sent\nI0221 12:00:14.119858    2373 log.go:172] (0xc0006f8370) (0xc0002975e0) Stream removed, broadcasting: 1\nI0221 12:00:14.119974    2373 log.go:172] (0xc0006f8370) (0xc000590000) Stream removed, broadcasting: 5\nI0221 12:00:14.120071    2373 log.go:172] (0xc0006f8370) (0xc0002975e0) Stream removed, broadcasting: 1\nI0221 12:00:14.120095    2373 log.go:172] (0xc0006f8370) (0xc0005be000) Stream removed, broadcasting: 3\nI0221 12:00:14.120101    2373 log.go:172] (0xc0006f8370) (0xc000590000) Stream removed, broadcasting: 5\n"
Feb 21 12:00:14.126: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 12:00:14.126: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 12:00:14.126: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 12:00:14.138: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 21 12:00:24.155: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 12:00:24.155: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 12:00:24.155: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 21 12:00:24.177: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:24.177: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:24.177: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:24.177: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:24.177: INFO: 
Feb 21 12:00:24.177: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:26.601: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:26.602: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:26.603: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:26.603: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:26.603: INFO: 
Feb 21 12:00:26.603: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:27.658: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:27.658: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:27.658: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:27.659: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:27.659: INFO: 
Feb 21 12:00:27.659: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:28.672: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:28.673: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:28.673: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:28.673: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:28.673: INFO: 
Feb 21 12:00:28.673: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:30.410: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:30.411: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:30.411: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:30.411: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:30.411: INFO: 
Feb 21 12:00:30.411: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:32.124: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:32.125: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:32.125: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:32.125: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:32.125: INFO: 
Feb 21 12:00:32.125: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:33.148: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:33.148: INFO: ss-0  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:28 +0000 UTC  }]
Feb 21 12:00:33.148: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:33.148: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:33.148: INFO: 
Feb 21 12:00:33.148: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 21 12:00:34.168: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 21 12:00:34.168: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:34.168: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:00:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 11:59:49 +0000 UTC  }]
Feb 21 12:00:34.168: INFO: 
Feb 21 12:00:34.168: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace e2e-tests-statefulset-9j5bx
Feb 21 12:00:35.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:35.399: INFO: rc: 1
Feb 21 12:00:35.399: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001f06f30 exit status 1   true [0xc00141a778 0xc00141a790 0xc00141a7a8] [0xc00141a778 0xc00141a790 0xc00141a7a8] [0xc00141a788 0xc00141a7a0] [0x935700 0x935700] 0xc001f12600 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 21 12:00:45.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:45.557: INFO: rc: 1
Feb 21 12:00:45.557: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001882510 exit status 1   true [0xc00194e598 0xc00194e5b0 0xc00194e5c8] [0xc00194e598 0xc00194e5b0 0xc00194e5c8] [0xc00194e5a8 0xc00194e5c0] [0x935700 0x935700] 0xc001c27380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

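From this point the error changes from "container not found" to `pods "ss-1" not found`: the scale-down to 0 replicas has deleted ss-1 entirely, so the exec that the framework keeps retrying on its 10s backoff can no longer find the pod at all. Checking the set directly would show the shrinking replica count, e.g.:

kubectl get statefulset ss -n e2e-tests-statefulset-9j5bx -o jsonpath='{.spec.replicas} {.status.replicas}'
kubectl get pods -n e2e-tests-statefulset-9j5bx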
Feb 21 12:00:55.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:00:55.685: INFO: rc: 1
Feb 21 12:00:55.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001882750 exit status 1   true [0xc00194e5d0 0xc00194e5e8 0xc00194e600] [0xc00194e5d0 0xc00194e5e8 0xc00194e600] [0xc00194e5e0 0xc00194e5f8] [0x935700 0x935700] 0xc001c27680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 21 12:01:05.686 - 12:05:30.013: INFO: RunHostCmd retried the same command ('/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true') every 10s, 27 further attempts in all; every attempt returned rc: 1 with empty stdout and stderr: Error from server (NotFound): pods "ss-1" not found

Feb 21 12:05:40.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9j5bx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:05:40.124: INFO: rc: 1
Feb 21 12:05:40.124: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb 21 12:05:40.125: INFO: Scaling statefulset ss to 0
Feb 21 12:05:40.150: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 21 12:05:40.157: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9j5bx
Feb 21 12:05:40.164: INFO: Scaling statefulset ss to 0
Feb 21 12:05:40.181: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 12:05:40.186: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:05:40.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-9j5bx" for this suite.
Feb 21 12:05:48.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:05:48.631: INFO: namespace: e2e-tests-statefulset-9j5bx, resource: bindings, ignored listing per whitelist
Feb 21 12:05:48.660: INFO: namespace e2e-tests-statefulset-9j5bx deletion completed in 8.300278341s

• [SLOW TEST:380.624 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
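For reference, the NotFound loop above comes from the suite's RunHostCmd helper repeatedly exec'ing into a pod that has already been scaled away. A rough way to reproduce the same check by hand, assuming a StatefulSet named ss in a namespace of your own (the namespace below is a placeholder; the e2e namespaces are created and deleted per run), would be:

# Placeholder namespace; substitute your own.
NS=my-statefulset-test

# Same file move the test issues inside each pod; '|| true' keeps the exec's
# shell from reporting failure when the file has already been moved.
kubectl -n "$NS" exec ss-1 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# Scale the set down and read status.replicas, as the teardown above does.
kubectl -n "$NS" scale statefulset ss --replicas=0
kubectl -n "$NS" get statefulset ss -o jsonpath='{.status.replicas}'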
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:05:48.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:06:01.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-ws8bd" for this suite.
Feb 21 12:06:09.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:06:09.499: INFO: namespace: e2e-tests-emptydir-wrapper-ws8bd, resource: bindings, ignored listing per whitelist
Feb 21 12:06:09.597: INFO: namespace e2e-tests-emptydir-wrapper-ws8bd deletion completed in 8.397175729s

• [SLOW TEST:20.937 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
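The wrapper-volume check above only logs its cleanup steps, so for orientation here is an illustrative pod spec of the general shape such a test exercises: one pod mounting a secret volume and a configMap volume side by side without conflict. The object names, image, and mount paths are made up for the sketch, not read from the run.

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volume-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox                  # assumed image, not the one the suite uses
    command: ["/bin/sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret    # illustrative secret
  - name: configmap-volume
    configMap:
      name: wrapper-configmap       # illustrative configmap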
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:06:09.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-9h486/configmap-test-8c0568f6-54a2-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:06:09.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-9h486" to be "success or failure"
Feb 21 12:06:09.930: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.28124ms
Feb 21 12:06:11.972: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063887197s
Feb 21 12:06:13.996: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08763678s
Feb 21 12:06:16.114: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205907576s
Feb 21 12:06:18.606: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.69855329s
Feb 21 12:06:20.625: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.717123427s
STEP: Saw pod success
Feb 21 12:06:20.625: INFO: Pod "pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:06:20.629: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008 container env-test: 
STEP: delete the pod
Feb 21 12:06:21.561: INFO: Waiting for pod pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:06:21.580: INFO: Pod pod-configmaps-8c070de5-54a2-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:06:21.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9h486" for this suite.
Feb 21 12:06:27.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:06:27.756: INFO: namespace: e2e-tests-configmap-9h486, resource: bindings, ignored listing per whitelist
Feb 21 12:06:27.836: INFO: namespace e2e-tests-configmap-9h486 deletion completed in 6.246609593s

• [SLOW TEST:18.239 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
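The ConfigMap-as-environment-variable pattern verified above can be reproduced with a manifest along these lines; the ConfigMap name, key, value, and pod name are placeholders rather than the per-run objects the suite created.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test              # placeholder name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo         # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1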
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:06:27.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 12:06:28.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-vlntj" to be "success or failure"
Feb 21 12:06:28.071: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.105089ms
Feb 21 12:06:30.264: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205251327s
Feb 21 12:06:32.282: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223225989s
Feb 21 12:06:34.302: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243437989s
Feb 21 12:06:36.323: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264952404s
Feb 21 12:06:38.342: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.283654319s
STEP: Saw pod success
Feb 21 12:06:38.342: INFO: Pod "downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:06:38.351: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 12:06:38.908: INFO: Waiting for pod downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:06:38.918: INFO: Pod downwardapi-volume-96d7ddfa-54a2-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:06:38.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vlntj" for this suite.
Feb 21 12:06:47.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:06:47.139: INFO: namespace: e2e-tests-downward-api-vlntj, resource: bindings, ignored listing per whitelist
Feb 21 12:06:47.172: INFO: namespace e2e-tests-downward-api-vlntj deletion completed in 8.247823642s

• [SLOW TEST:19.336 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
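"Set mode on item file" amounts to a downwardAPI volume whose item carries an explicit per-item mode. A minimal manifest of that shape (file name, mode value, image, and pod name chosen only for illustration) looks like:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                  # per-item file mode (octal); the property the test asserts on
        fieldRef:
          fieldPath: metadata.name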
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:06:47.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 12:06:47.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-5799h" to be "success or failure"
Feb 21 12:06:47.438: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.646207ms
Feb 21 12:06:49.455: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026036483s
Feb 21 12:06:51.479: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050335549s
Feb 21 12:06:53.494: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065481826s
Feb 21 12:06:55.513: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083711019s
Feb 21 12:06:57.522: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09336024s
STEP: Saw pod success
Feb 21 12:06:57.523: INFO: Pod "downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:06:57.526: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 12:06:58.608: INFO: Waiting for pod downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:06:58.878: INFO: Pod downwardapi-volume-a25feb32-54a2-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:06:58.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5799h" for this suite.
Feb 21 12:07:05.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:07:05.158: INFO: namespace: e2e-tests-projected-5799h, resource: bindings, ignored listing per whitelist
Feb 21 12:07:05.504: INFO: namespace e2e-tests-projected-5799h deletion completed in 6.599327016s

• [SLOW TEST:18.332 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
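The projected variant works the same way but nests the downwardAPI source under a projected volume, and the CPU limit is surfaced through a resourceFieldRef. This sketch uses invented names and a limit chosen only for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                   # illustrative limit; the projected file reports this value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu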
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:07:05.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 21 12:07:27.873: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 21 12:07:27.946: INFO: Pod pod-with-prestop-http-hook still exists
Feb 21 12:07:29.948: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 21 12:07:30.136: INFO: Pod pod-with-prestop-http-hook still exists
Feb 21 12:07:31.948: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 21 12:07:32.485: INFO: Pod pod-with-prestop-http-hook still exists
Feb 21 12:07:33.947: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 21 12:07:34.037: INFO: Pod pod-with-prestop-http-hook still exists
Feb 21 12:07:35.948: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 21 12:07:35.960: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:07:35.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-czfr8" for this suite.
Feb 21 12:08:00.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:08:00.096: INFO: namespace: e2e-tests-container-lifecycle-hook-czfr8, resource: bindings, ignored listing per whitelist
Feb 21 12:08:00.169: INFO: namespace e2e-tests-container-lifecycle-hook-czfr8 deletion completed in 24.1738738s

• [SLOW TEST:54.664 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
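The preStop hook exercised here is an httpGet lifecycle handler that fires while the pod is being deleted. A pod carrying such a hook can be declared roughly as below; the handler path, port, and host are placeholders, since the run only logs the deletion polling.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook  # name matches the log; the spec details are assumed
spec:
  containers:
  - name: main
    image: nginx                    # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # illustrative path on the handler
          port: 8080                # illustrative handler port
          host: 10.96.1.241         # illustrative handler IP, not taken from the run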
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:08:00.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 21 12:08:10.472: INFO: Pod pod-hostip-cde61c08-54a2-11ea-b1f8-0242ac110008 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:08:10.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fxplg" for this suite.
Feb 21 12:08:34.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:08:34.668: INFO: namespace: e2e-tests-pods-fxplg, resource: bindings, ignored listing per whitelist
Feb 21 12:08:34.736: INFO: namespace e2e-tests-pods-fxplg deletion completed in 24.254504123s

• [SLOW TEST:34.567 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
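The hostIP the test reads (10.96.1.240 above) is plain pod status, so the same value can be pulled back with a jsonpath query; pod and namespace names here are placeholders:

# Placeholder names; substitute the pod and namespace you are inspecting.
kubectl -n my-namespace get pod my-pod -o jsonpath='{.status.hostIP}{"\n"}'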
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:08:34.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 21 12:08:34.955: INFO: Waiting up to 5m0s for pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-p6t8n" to be "success or failure"
Feb 21 12:08:34.977: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 22.456686ms
Feb 21 12:08:36.992: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036896383s
Feb 21 12:08:39.003: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048180231s
Feb 21 12:08:41.020: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065254344s
Feb 21 12:08:43.036: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08078827s
Feb 21 12:08:45.055: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.100171931s
STEP: Saw pod success
Feb 21 12:08:45.055: INFO: Pod "pod-e26c5609-54a2-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:08:45.061: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e26c5609-54a2-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:08:45.128: INFO: Waiting for pod pod-e26c5609-54a2-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:08:45.243: INFO: Pod pod-e26c5609-54a2-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:08:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p6t8n" for this suite.
Feb 21 12:08:52.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:08:52.068: INFO: namespace: e2e-tests-emptydir-p6t8n, resource: bindings, ignored listing per whitelist
Feb 21 12:08:52.268: INFO: namespace e2e-tests-emptydir-p6t8n deletion completed in 6.994759151s

• [SLOW TEST:17.531 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
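"(root,0777,default)" denotes a container running as root, a 0777 file mode, and the default (node-disk) emptyDir medium. The volume itself is declared trivially, as in this sketch with invented names and image:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo               # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium; 'medium: Memory' would select tmpfs instead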
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:08:52.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 21 12:08:52.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9dkcw'
Feb 21 12:08:55.091: INFO: stderr: ""
Feb 21 12:08:55.091: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 21 12:08:56.104: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:08:56.104: INFO: Found 0 / 1
Feb 21 12:08:57.105: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:08:57.105: INFO: Found 0 / 1
Feb 21 12:08:58.103: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:08:58.103: INFO: Found 0 / 1
Feb 21 12:08:59.102: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:08:59.102: INFO: Found 0 / 1
Feb 21 12:09:00.698: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:00.706: INFO: Found 0 / 1
Feb 21 12:09:01.110: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:01.111: INFO: Found 0 / 1
Feb 21 12:09:02.356: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:02.356: INFO: Found 0 / 1
Feb 21 12:09:03.107: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:03.107: INFO: Found 0 / 1
Feb 21 12:09:04.103: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:04.103: INFO: Found 0 / 1
Feb 21 12:09:05.116: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:05.116: INFO: Found 1 / 1
Feb 21 12:09:05.116: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 21 12:09:05.122: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:05.122: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 21 12:09:05.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-sf9mp --namespace=e2e-tests-kubectl-9dkcw -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 21 12:09:05.274: INFO: stderr: ""
Feb 21 12:09:05.274: INFO: stdout: "pod/redis-master-sf9mp patched\n"
STEP: checking annotations
Feb 21 12:09:05.291: INFO: Selector matched 1 pods for map[app:redis]
Feb 21 12:09:05.291: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:09:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9dkcw" for this suite.
Feb 21 12:09:29.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:09:29.500: INFO: namespace: e2e-tests-kubectl-9dkcw, resource: bindings, ignored listing per whitelist
Feb 21 12:09:29.620: INFO: namespace e2e-tests-kubectl-9dkcw deletion completed in 24.26875811s

• [SLOW TEST:37.350 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
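The patch applied above is a strategic-merge patch on pod metadata. Applying it and reading the annotation back by hand looks like this; the pod and namespace names are placeholders for the per-run objects (the run patched redis-master-sf9mp in its own namespace):

# Placeholder pod/namespace; the patch payload is the same one the test sends.
kubectl -n my-namespace patch pod my-pod -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl -n my-namespace get pod my-pod -o jsonpath='{.metadata.annotations.x}{"\n"}'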
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:09:29.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 21 12:09:29.908: INFO: Waiting up to 5m0s for pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-59457" to be "success or failure"
Feb 21 12:09:30.013: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 103.900412ms
Feb 21 12:09:32.040: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131103646s
Feb 21 12:09:34.065: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156308073s
Feb 21 12:09:36.619: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709743452s
Feb 21 12:09:38.631: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721665095s
Feb 21 12:09:40.889: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.980448206s
STEP: Saw pod success
Feb 21 12:09:40.890: INFO: Pod "downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:09:40.905: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 12:09:41.097: INFO: Waiting for pod downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:09:41.105: INFO: Pod downward-api-033c07f6-54a3-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:09:41.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-59457" for this suite.
Feb 21 12:09:47.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:09:47.248: INFO: namespace: e2e-tests-downward-api-59457, resource: bindings, ignored listing per whitelist
Feb 21 12:09:47.289: INFO: namespace e2e-tests-downward-api-59457 deletion completed in 6.178225597s

• [SLOW TEST:17.668 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
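Exposing the host IP as an environment variable is done with a fieldRef on status.hostIP. A minimal pod of that shape (names and image assumed, not taken from the run) is:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                  # assumed image
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP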
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:09:47.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 21 12:09:47.469: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 21 12:09:47.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:48.164: INFO: stderr: ""
Feb 21 12:09:48.164: INFO: stdout: "service/redis-slave created\n"
Feb 21 12:09:48.165: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 21 12:09:48.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:48.720: INFO: stderr: ""
Feb 21 12:09:48.721: INFO: stdout: "service/redis-master created\n"
Feb 21 12:09:48.722: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 21 12:09:48.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:49.295: INFO: stderr: ""
Feb 21 12:09:49.296: INFO: stdout: "service/frontend created\n"
Feb 21 12:09:49.297: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 21 12:09:49.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:49.763: INFO: stderr: ""
Feb 21 12:09:49.763: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 21 12:09:49.765: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 21 12:09:49.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:50.580: INFO: stderr: ""
Feb 21 12:09:50.581: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 21 12:09:50.582: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 21 12:09:50.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:09:51.092: INFO: stderr: ""
Feb 21 12:09:51.092: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 21 12:09:51.092: INFO: Waiting for all frontend pods to be Running.
Feb 21 12:10:21.146: INFO: Waiting for frontend to serve content.
Feb 21 12:10:21.324: INFO: Trying to add a new entry to the guestbook.
Feb 21 12:10:21.483: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 21 12:10:21.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:21.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:21.957: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 12:10:21.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:22.301: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:22.301: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 12:10:22.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:22.627: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:22.627: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 12:10:22.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:22.817: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:22.817: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 12:10:22.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:23.387: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:23.387: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 21 12:10:23.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-4v4n5'
Feb 21 12:10:23.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 21 12:10:23.955: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:10:23.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-4v4n5" for this suite.
Feb 21 12:11:16.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:11:16.170: INFO: namespace: e2e-tests-kubectl-4v4n5, resource: bindings, ignored listing per whitelist
Feb 21 12:11:16.229: INFO: namespace e2e-tests-kubectl-4v4n5 deletion completed in 52.218895365s

• [SLOW TEST:88.940 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
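Note: the guestbook run above pipes the service and deployment manifests to kubectl on stdin, waits for the frontend to serve content, then force-deletes everything. A rough hand-run sketch of the same create/validate/clean-up cycle (directory, namespace, and label values illustrative, not taken from the run) would look like:

# Create the guestbook resources; the e2e test pipes equivalent YAML via `create -f -`.
kubectl create -f guestbook/ --namespace=guestbook-demo

# Wait for the frontend pods to be Running before exercising the app.
kubectl get pods -l app=guestbook,tier=frontend --namespace=guestbook-demo

# Clean up the same way the test does: immediate, forced deletion of each resource.
kubectl delete --grace-period=0 --force -f guestbook/ --namespace=guestbook-demo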
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:11:16.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 12:11:16.635: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 21 12:11:21.706: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 21 12:11:27.739: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 21 12:11:27.905: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-lrgqk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrgqk/deployments/test-cleanup-deployment,UID:497d14fb-54a3-11ea-a994-fa163e34d433,ResourceVersion:22421663,Generation:1,CreationTimestamp:2020-02-21 12:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 21 12:11:27.943: INFO: New ReplicaSet "test-cleanup-deployment-6df768c57" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-6df768c57,GenerateName:,Namespace:e2e-tests-deployment-lrgqk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrgqk/replicasets/test-cleanup-deployment-6df768c57,UID:4994b9ee-54a3-11ea-a994-fa163e34d433,ResourceVersion:22421665,Generation:1,CreationTimestamp:2020-02-21 12:11:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 497d14fb-54a3-11ea-a994-fa163e34d433 0xc0021667b0 0xc0021667b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 6df768c57,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 12:11:27.943: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 21 12:11:27.944: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-lrgqk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrgqk/replicasets/test-cleanup-controller,UID:42c1ff6f-54a3-11ea-a994-fa163e34d433,ResourceVersion:22421664,Generation:1,CreationTimestamp:2020-02-21 12:11:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 497d14fb-54a3-11ea-a994-fa163e34d433 0xc0021666bf 0xc0021666e0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 21 12:11:27.970: INFO: Pod "test-cleanup-controller-nsvvc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-nsvvc,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-lrgqk,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lrgqk/pods/test-cleanup-controller-nsvvc,UID:42dff3fb-54a3-11ea-a994-fa163e34d433,ResourceVersion:22421658,Generation:0,CreationTimestamp:2020-02-21 12:11:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 42c1ff6f-54a3-11ea-a994-fa163e34d433 0xc0021671c7 0xc0021671c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-slwn9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-slwn9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-slwn9 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002167230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002167250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:11:16 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:11:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:11:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:11:16 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-21 12:11:16 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:11:25 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://71d2e84dcb8773c1872a589283ddbf26a17b9a827997c43305939bbe0a2e8516}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:11:27.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-lrgqk" for this suite.
Feb 21 12:11:44.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:11:44.374: INFO: namespace: e2e-tests-deployment-lrgqk, resource: bindings, ignored listing per whitelist
Feb 21 12:11:44.415: INFO: namespace e2e-tests-deployment-lrgqk deletion completed in 16.325060368s

• [SLOW TEST:28.186 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
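Note: the Deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete superseded ReplicaSets once the rollout completes. A minimal sketch of a Deployment with that field set explicitly (names illustrative; image and pod label taken from the dump) is:

kubectl create --namespace=cleanup-demo -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0    # keep no old ReplicaSets after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF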
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:11:44.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 21 12:11:44.655: INFO: Waiting up to 5m0s for pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-9gpwc" to be "success or failure"
Feb 21 12:11:44.709: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 53.943472ms
Feb 21 12:11:46.724: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069412116s
Feb 21 12:11:48.749: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093887112s
Feb 21 12:11:50.783: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128457873s
Feb 21 12:11:52.842: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187391184s
Feb 21 12:11:55.615: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.960117938s
STEP: Saw pod success
Feb 21 12:11:55.615: INFO: Pod "pod-538c265f-54a3-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:11:55.953: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-538c265f-54a3-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:11:56.031: INFO: Waiting for pod pod-538c265f-54a3-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:11:56.088: INFO: Pod pod-538c265f-54a3-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:11:56.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9gpwc" for this suite.
Feb 21 12:12:02.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:12:02.256: INFO: namespace: e2e-tests-emptydir-9gpwc, resource: bindings, ignored listing per whitelist
Feb 21 12:12:02.326: INFO: namespace e2e-tests-emptydir-9gpwc deletion completed in 6.230584123s

• [SLOW TEST:17.911 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
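Note: the emptyDir test above runs a short-lived pod as a non-root UID, mounts an emptyDir on the default medium, and checks the mount is usable with 0777 permissions. A hand-written rough equivalent (UID, image, and names are illustrative) might be:

kubectl create --namespace=emptydir-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  securityContext:
    runAsUser: 1001          # any non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/ed && touch /mnt/ed/probe"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}             # default medium (node disk)
EOF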
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:12:02.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 21 12:12:03.530: INFO: Pod name wrapped-volume-race-5ec7bd58-54a3-11ea-b1f8-0242ac110008: Found 0 pods out of 5
Feb 21 12:12:08.638: INFO: Pod name wrapped-volume-race-5ec7bd58-54a3-11ea-b1f8-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5ec7bd58-54a3-11ea-b1f8-0242ac110008 in namespace e2e-tests-emptydir-wrapper-s2szj, will wait for the garbage collector to delete the pods
Feb 21 12:13:50.946: INFO: Deleting ReplicationController wrapped-volume-race-5ec7bd58-54a3-11ea-b1f8-0242ac110008 took: 26.183839ms
Feb 21 12:13:51.347: INFO: Terminating ReplicationController wrapped-volume-race-5ec7bd58-54a3-11ea-b1f8-0242ac110008 pods took: 401.1648ms
STEP: Creating RC which spawns configmap-volume pods
Feb 21 12:14:35.404: INFO: Pod name wrapped-volume-race-b94d7a89-54a3-11ea-b1f8-0242ac110008: Found 0 pods out of 5
Feb 21 12:14:40.433: INFO: Pod name wrapped-volume-race-b94d7a89-54a3-11ea-b1f8-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b94d7a89-54a3-11ea-b1f8-0242ac110008 in namespace e2e-tests-emptydir-wrapper-s2szj, will wait for the garbage collector to delete the pods
Feb 21 12:16:54.775: INFO: Deleting ReplicationController wrapped-volume-race-b94d7a89-54a3-11ea-b1f8-0242ac110008 took: 31.019126ms
Feb 21 12:16:55.276: INFO: Terminating ReplicationController wrapped-volume-race-b94d7a89-54a3-11ea-b1f8-0242ac110008 pods took: 500.704844ms
STEP: Creating RC which spawns configmap-volume pods
Feb 21 12:17:42.933: INFO: Pod name wrapped-volume-race-2904b1d0-54a4-11ea-b1f8-0242ac110008: Found 0 pods out of 5
Feb 21 12:17:47.983: INFO: Pod name wrapped-volume-race-2904b1d0-54a4-11ea-b1f8-0242ac110008: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2904b1d0-54a4-11ea-b1f8-0242ac110008 in namespace e2e-tests-emptydir-wrapper-s2szj, will wait for the garbage collector to delete the pods
Feb 21 12:19:52.135: INFO: Deleting ReplicationController wrapped-volume-race-2904b1d0-54a4-11ea-b1f8-0242ac110008 took: 25.433336ms
Feb 21 12:19:52.436: INFO: Terminating ReplicationController wrapped-volume-race-2904b1d0-54a4-11ea-b1f8-0242ac110008 pods took: 300.83792ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:20:44.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-s2szj" for this suite.
Feb 21 12:20:52.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:20:52.576: INFO: namespace: e2e-tests-emptydir-wrapper-s2szj, resource: bindings, ignored listing per whitelist
Feb 21 12:20:52.661: INFO: namespace e2e-tests-emptydir-wrapper-s2szj deletion completed in 8.352953337s

• [SLOW TEST:530.334 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
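Note: the wrapper-volume race test repeatedly creates a ReplicationController whose pods each mount many of the 50 pre-created ConfigMaps as volumes, then deletes the RC and waits for the garbage collector. A scaled-down sketch of one such pod spec (two ConfigMaps instead of fifty; all names illustrative) could be:

kubectl create --namespace=wrapper-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls /etc/cm-0 /etc/cm-1"]
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
    - name: cm-1
      mountPath: /etc/cm-1
  volumes:
  - name: cm-0
    configMap:
      name: wrapped-cm-0     # one of the pre-created ConfigMaps
  - name: cm-1
    configMap:
      name: wrapped-cm-1
EOF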
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:20:52.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 21 12:20:52.990: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422808,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 21 12:20:52.991: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422809,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 21 12:20:52.991: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422810,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 21 12:21:04.802: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422825,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 12:21:04.802: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422826,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 21 12:21:04.802: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-nqstd,SelfLink:/api/v1/namespaces/e2e-tests-watch-nqstd/configmaps/e2e-watch-test-label-changed,UID:9a4e16d5-54a4-11ea-a994-fa163e34d433,ResourceVersion:22422827,Generation:0,CreationTimestamp:2020-02-21 12:20:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:21:04.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nqstd" for this suite.
Feb 21 12:21:10.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:21:10.962: INFO: namespace: e2e-tests-watch-nqstd, resource: bindings, ignored listing per whitelist
Feb 21 12:21:11.092: INFO: namespace e2e-tests-watch-nqstd deletion completed in 6.262882522s

• [SLOW TEST:18.430 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
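Note: the watch test above drives the same behaviour you can observe from the command line: it watches ConfigMaps carrying the label watch-this-configmap=label-changed-and-restored, changes the label away and back, and expects notifications only while the label matches the selector. A hand-run equivalent (namespace illustrative; selector taken from the log) is:

# Watch only ConfigMaps whose label currently matches the selector; the object
# drops out of the watch when the label changes and reappears when restored.
kubectl get configmaps \
  -l watch-this-configmap=label-changed-and-restored \
  --watch --namespace=watch-demo

# In another shell: flip the label away and back.
kubectl label configmap e2e-watch-test-label-changed \
  watch-this-configmap=other --overwrite --namespace=watch-demo
kubectl label configmap e2e-watch-test-label-changed \
  watch-this-configmap=label-changed-and-restored --overwrite --namespace=watch-demo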
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:21:11.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:21:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-p5st6" for this suite.
Feb 21 12:21:48.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:21:48.663: INFO: namespace: e2e-tests-replication-controller-p5st6, resource: bindings, ignored listing per whitelist
Feb 21 12:21:48.727: INFO: namespace e2e-tests-replication-controller-p5st6 deletion completed in 24.168648811s

• [SLOW TEST:37.635 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
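Note: adoption, as exercised above, only needs a bare pod whose labels match the selector of a ReplicationController created afterwards. A minimal sketch (image and namespace illustrative; the label comes from the STEP text) is:

# 1. A bare pod carrying the label the controller will select on.
kubectl create --namespace=rc-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
EOF

# 2. An RC with a matching selector; it adopts the existing pod
#    instead of creating a second one.
kubectl create --namespace=rc-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF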
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:21:48.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 21 12:21:59.102: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:22:24.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-266j8" for this suite.
Feb 21 12:22:30.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:22:30.925: INFO: namespace: e2e-tests-namespaces-266j8, resource: bindings, ignored listing per whitelist
Feb 21 12:22:30.978: INFO: namespace e2e-tests-namespaces-266j8 deletion completed in 6.156840453s
STEP: Destroying namespace "e2e-tests-nsdeletetest-fpnzp" for this suite.
Feb 21 12:22:30.982: INFO: Namespace e2e-tests-nsdeletetest-fpnzp was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-7z8pm" for this suite.
Feb 21 12:22:37.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:22:37.136: INFO: namespace: e2e-tests-nsdeletetest-7z8pm, resource: bindings, ignored listing per whitelist
Feb 21 12:22:37.169: INFO: namespace e2e-tests-nsdeletetest-7z8pm deletion completed in 6.18677072s

• [SLOW TEST:48.441 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
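Note: the namespace test is the API-level version of a simple manual check: create a namespace, run a pod in it, delete the namespace, recreate it, and confirm no pods survived. By hand (names illustrative) that is roughly:

kubectl create namespace nsdelete-demo
kubectl run test-pod --image=busybox --restart=Never \
  --namespace=nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo      # the namespace controller removes all pods in it
kubectl create namespace nsdelete-demo
kubectl get pods --namespace=nsdelete-demo  # expect: No resources found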
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:22:37.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 21 12:22:37.299: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix002034213/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:22:37.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mjk7z" for this suite.
Feb 21 12:22:45.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:22:45.514: INFO: namespace: e2e-tests-kubectl-mjk7z, resource: bindings, ignored listing per whitelist
Feb 21 12:22:45.647: INFO: namespace e2e-tests-kubectl-mjk7z deletion completed in 8.241590864s

• [SLOW TEST:8.478 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
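Note: the proxy test above starts kubectl proxy on a Unix socket and fetches /api/ through it. Done by hand (socket path illustrative), that is:

# Serve the API over a local Unix socket instead of a TCP port.
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &

# Retrieve the /api/ discovery document through the socket.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/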
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:22:45.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dda75fdd-54a4-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:22:45.953: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-nhlqx" to be "success or failure"
Feb 21 12:22:45.963: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.788409ms
Feb 21 12:22:48.052: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098754294s
Feb 21 12:22:50.083: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12896792s
Feb 21 12:22:52.351: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397923605s
Feb 21 12:22:54.368: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.41461835s
Feb 21 12:22:56.390: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.436024125s
STEP: Saw pod success
Feb 21 12:22:56.390: INFO: Pod "pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:22:56.396: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 12:22:56.837: INFO: Waiting for pod pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:22:57.531: INFO: Pod pod-projected-configmaps-ddb5aa97-54a4-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:22:57.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nhlqx" for this suite.
Feb 21 12:23:03.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:23:04.000: INFO: namespace: e2e-tests-projected-nhlqx, resource: bindings, ignored listing per whitelist
Feb 21 12:23:04.114: INFO: namespace e2e-tests-projected-nhlqx deletion completed in 6.572065957s

• [SLOW TEST:18.467 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
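Note: the projected-volume test consumes a ConfigMap through a projected volume from a pod running as a non-root user. A stripped-down sketch (UID, key, image, and names all illustrative) could be:

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1 \
  --namespace=projected-demo

kubectl create --namespace=projected-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  securityContext:
    runAsUser: 1000          # non-root
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF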
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:23:04.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 21 12:23:04.284: INFO: Waiting up to 5m0s for pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008" in namespace "e2e-tests-var-expansion-wfvtq" to be "success or failure"
Feb 21 12:23:04.399: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 114.4416ms
Feb 21 12:23:07.676: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.391441273s
Feb 21 12:23:09.699: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.414189756s
Feb 21 12:23:11.925: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.640202639s
Feb 21 12:23:13.945: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.660858834s
Feb 21 12:23:15.971: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.686192107s
STEP: Saw pod success
Feb 21 12:23:15.971: INFO: Pod "var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:23:15.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 12:23:16.714: INFO: Waiting for pod var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:23:17.006: INFO: Pod var-expansion-e8a4941e-54a4-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:23:17.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wfvtq" for this suite.
Feb 21 12:23:23.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:23:23.143: INFO: namespace: e2e-tests-var-expansion-wfvtq, resource: bindings, ignored listing per whitelist
Feb 21 12:23:23.265: INFO: namespace e2e-tests-var-expansion-wfvtq deletion completed in 6.247037031s

• [SLOW TEST:19.149 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
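Note: the variable-expansion test runs a pod whose command references an environment variable with the $(VAR) syntax, which Kubernetes substitutes before starting the container, so no shell is needed. A minimal sketch (names and value illustrative):

kubectl create --namespace=var-expansion-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from variable expansion"
    # $(MESSAGE) is expanded by Kubernetes itself, not by a shell.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF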
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:23:23.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0221 12:23:34.024914       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 21 12:23:34.025: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:23:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hns9c" for this suite.
Feb 21 12:23:40.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:23:40.499: INFO: namespace: e2e-tests-gc-hns9c, resource: bindings, ignored listing per whitelist
Feb 21 12:23:40.510: INFO: namespace e2e-tests-gc-hns9c deletion completed in 6.478141457s

• [SLOW TEST:17.245 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
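Note: the garbage-collector test deletes the RC without orphaning, so its pods carry ownerReferences and are collected automatically. From the command line the difference is the cascade flag (RC name and namespace illustrative; newer kubectl spells the orphaning variant --cascade=orphan):

# Cascading delete (the behaviour exercised above): the garbage
# collector removes the RC's pods via their ownerReferences.
kubectl delete rc demo-rc --namespace=gc-demo

# Orphaning delete: the RC goes away but its pods are left behind.
kubectl delete rc demo-rc --cascade=false --namespace=gc-demo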
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:23:40.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 21 12:23:51.445: INFO: Successfully updated pod "labelsupdatefe643cd1-54a4-11ea-b1f8-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:23:53.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jncgm" for this suite.
Feb 21 12:24:17.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:24:17.687: INFO: namespace: e2e-tests-downward-api-jncgm, resource: bindings, ignored listing per whitelist
Feb 21 12:24:17.758: INFO: namespace e2e-tests-downward-api-jncgm deletion completed in 24.220411628s

• [SLOW TEST:37.248 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
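Note: the downward API test mounts the pod's own labels as a file and expects the file contents to change after the labels are updated in place. A rough equivalent (pod name, label, and image illustrative):

kubectl create --namespace=downward-demo -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF

# Updating the label is reflected in /etc/podinfo/labels after the
# kubelet's next sync of the downward API volume.
kubectl label pod labelsupdate-demo stage=after --overwrite --namespace=downward-demo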
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:24:17.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 21 12:24:18.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:20.228: INFO: stderr: ""
Feb 21 12:24:20.228: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 12:24:20.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:20.556: INFO: stderr: ""
Feb 21 12:24:20.557: INFO: stdout: "update-demo-nautilus-fx85g update-demo-nautilus-jjlz4 "
Feb 21 12:24:20.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:20.758: INFO: stderr: ""
Feb 21 12:24:20.758: INFO: stdout: ""
Feb 21 12:24:20.758: INFO: update-demo-nautilus-fx85g is created but not running
Feb 21 12:24:25.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:26.065: INFO: stderr: ""
Feb 21 12:24:26.065: INFO: stdout: "update-demo-nautilus-fx85g update-demo-nautilus-jjlz4 "
Feb 21 12:24:26.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:26.232: INFO: stderr: ""
Feb 21 12:24:26.232: INFO: stdout: ""
Feb 21 12:24:26.232: INFO: update-demo-nautilus-fx85g is created but not running
Feb 21 12:24:31.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:31.412: INFO: stderr: ""
Feb 21 12:24:31.413: INFO: stdout: "update-demo-nautilus-fx85g update-demo-nautilus-jjlz4 "
Feb 21 12:24:31.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:31.554: INFO: stderr: ""
Feb 21 12:24:31.554: INFO: stdout: "true"
Feb 21 12:24:31.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:31.665: INFO: stderr: ""
Feb 21 12:24:31.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 12:24:31.665: INFO: validating pod update-demo-nautilus-fx85g
Feb 21 12:24:31.713: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 12:24:31.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 21 12:24:31.713: INFO: update-demo-nautilus-fx85g is verified up and running
Feb 21 12:24:31.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjlz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:31.812: INFO: stderr: ""
Feb 21 12:24:31.812: INFO: stdout: ""
Feb 21 12:24:31.812: INFO: update-demo-nautilus-jjlz4 is created but not running
Feb 21 12:24:36.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:37.018: INFO: stderr: ""
Feb 21 12:24:37.019: INFO: stdout: "update-demo-nautilus-fx85g update-demo-nautilus-jjlz4 "
Feb 21 12:24:37.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:37.148: INFO: stderr: ""
Feb 21 12:24:37.149: INFO: stdout: "true"
Feb 21 12:24:37.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fx85g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:37.293: INFO: stderr: ""
Feb 21 12:24:37.293: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 12:24:37.294: INFO: validating pod update-demo-nautilus-fx85g
Feb 21 12:24:37.305: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 12:24:37.306: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 12:24:37.306: INFO: update-demo-nautilus-fx85g is verified up and running
Feb 21 12:24:37.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjlz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:37.428: INFO: stderr: ""
Feb 21 12:24:37.429: INFO: stdout: "true"
Feb 21 12:24:37.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjlz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:24:37.539: INFO: stderr: ""
Feb 21 12:24:37.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 21 12:24:37.539: INFO: validating pod update-demo-nautilus-jjlz4
Feb 21 12:24:37.552: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 21 12:24:37.552: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 21 12:24:37.553: INFO: update-demo-nautilus-jjlz4 is verified up and running
STEP: rolling-update to new replication controller
Feb 21 12:24:37.555: INFO: scanned /root for discovery docs: 
Feb 21 12:24:37.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:10.146: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 21 12:25:10.146: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 21 12:25:10.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:10.313: INFO: stderr: ""
Feb 21 12:25:10.313: INFO: stdout: "update-demo-kitten-24b59 update-demo-kitten-vzsvb update-demo-nautilus-fx85g "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 21 12:25:15.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:15.496: INFO: stderr: ""
Feb 21 12:25:15.497: INFO: stdout: "update-demo-kitten-24b59 update-demo-kitten-vzsvb "
Feb 21 12:25:15.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-24b59 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:15.698: INFO: stderr: ""
Feb 21 12:25:15.698: INFO: stdout: "true"
Feb 21 12:25:15.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-24b59 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:15.875: INFO: stderr: ""
Feb 21 12:25:15.875: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 21 12:25:15.875: INFO: validating pod update-demo-kitten-24b59
Feb 21 12:25:15.920: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 21 12:25:15.920: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 21 12:25:15.920: INFO: update-demo-kitten-24b59 is verified up and running
Feb 21 12:25:15.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vzsvb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:16.055: INFO: stderr: ""
Feb 21 12:25:16.056: INFO: stdout: "true"
Feb 21 12:25:16.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vzsvb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xjks6'
Feb 21 12:25:16.279: INFO: stderr: ""
Feb 21 12:25:16.280: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 21 12:25:16.280: INFO: validating pod update-demo-kitten-vzsvb
Feb 21 12:25:16.316: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 21 12:25:16.316: INFO: Unmarshalled JSON jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 21 12:25:16.316: INFO: update-demo-kitten-vzsvb is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:25:16.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xjks6" for this suite.
Feb 21 12:25:48.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:25:48.569: INFO: namespace: e2e-tests-kubectl-xjks6, resource: bindings, ignored listing per whitelist
Feb 21 12:25:48.645: INFO: namespace e2e-tests-kubectl-xjks6 deletion completed in 32.311239951s

• [SLOW TEST:90.886 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
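
The readiness polling above can be reproduced by hand with the same Go templates the suite invokes; a minimal sketch (namespace and pod name are the ones from this run and will differ elsewhere):

# List the pods managed by the update-demo controller
kubectl --namespace=e2e-tests-kubectl-xjks6 get pods -l name=update-demo \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

# Check that the update-demo container in one of those pods is in the running state
kubectl --namespace=e2e-tests-kubectl-xjks6 get pods update-demo-nautilus-fx85g \
    -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

As the stderr above notes, kubectl rolling-update is deprecated; on current clusters the equivalent workflow is a Deployment driven with kubectl rollout.
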
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:25:48.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 12:25:48.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-ctx8r" to be "success or failure"
Feb 21 12:25:49.041: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 111.876331ms
Feb 21 12:25:51.463: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.534577866s
Feb 21 12:25:53.492: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.563490758s
Feb 21 12:25:55.504: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.575247691s
Feb 21 12:25:57.518: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5891567s
Feb 21 12:25:59.533: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.603780821s
STEP: Saw pod success
Feb 21 12:25:59.533: INFO: Pod "downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:25:59.538: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 12:26:00.166: INFO: Waiting for pod downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:26:00.185: INFO: Pod downwardapi-volume-4ac2f832-54a5-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:26:00.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ctx8r" for this suite.
Feb 21 12:26:06.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:26:06.359: INFO: namespace: e2e-tests-projected-ctx8r, resource: bindings, ignored listing per whitelist
Feb 21 12:26:07.110: INFO: namespace e2e-tests-projected-ctx8r deletion completed in 6.915161731s

• [SLOW TEST:18.464 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
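
The pod spec the test creates is not echoed into the log; what it exercises is a projected volume with a downwardAPI source that publishes only the pod name. A minimal, hypothetical equivalent (pod name, image and mount path are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
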
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:26:07.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 21 12:26:07.719: INFO: Waiting up to 5m0s for pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-hxshc" to be "success or failure"
Feb 21 12:26:07.757: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 37.446393ms
Feb 21 12:26:09.769: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049721299s
Feb 21 12:26:11.795: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076171202s
Feb 21 12:26:13.808: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088670973s
Feb 21 12:26:16.010: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290696473s
Feb 21 12:26:18.021: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.301266088s
STEP: Saw pod success
Feb 21 12:26:18.021: INFO: Pod "pod-55fb9b70-54a5-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:26:18.024: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-55fb9b70-54a5-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:26:19.045: INFO: Waiting for pod pod-55fb9b70-54a5-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:26:19.080: INFO: Pod pod-55fb9b70-54a5-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:26:19.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hxshc" for this suite.
Feb 21 12:26:25.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:26:25.166: INFO: namespace: e2e-tests-emptydir-hxshc, resource: bindings, ignored listing per whitelist
Feb 21 12:26:25.421: INFO: namespace e2e-tests-emptydir-hxshc deletion completed in 6.330293498s

• [SLOW TEST:18.309 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
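
The (non-root,0666,default) tuple in the test name describes the user the container runs as, the file mode it verifies, and the emptyDir medium (the node's default storage). A minimal, hypothetical pod exercising the same combination:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/data && chmod 0666 /mnt/volume/data && ls -l /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium (node disk)
EOF
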
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:26:25.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-2p9fs
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-2p9fs
STEP: Deleting pre-stop pod
Feb 21 12:26:50.983: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:26:51.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-2p9fs" for this suite.
Feb 21 12:27:35.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:27:35.291: INFO: namespace: e2e-tests-prestop-2p9fs, resource: bindings, ignored listing per whitelist
Feb 21 12:27:35.391: INFO: namespace e2e-tests-prestop-2p9fs deletion completed in 44.24318604s

• [SLOW TEST:69.970 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
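
The server and tester pods above are fixtures internal to the suite; the mechanism under test is the container lifecycle preStop hook, which runs inside the container after deletion is requested and before SIGTERM is delivered. A minimal, hypothetical example of such a hook:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-example
spec:
  containers:
  - name: server
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # illustrative hook: record the event, then give in-flight work time to drain
          command: ["/bin/sh", "-c", "echo prestop >> /tmp/hook.log && sleep 5"]
EOF
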
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:27:35.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-x2hrg
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-x2hrg
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-x2hrg
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-x2hrg
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-x2hrg
Feb 21 12:27:49.975: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x2hrg, name: ss-0, uid: 90cee14f-54a5-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 21 12:27:52.485: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x2hrg, name: ss-0, uid: 90cee14f-54a5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 21 12:27:52.667: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-x2hrg, name: ss-0, uid: 90cee14f-54a5-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 21 12:27:52.691: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-x2hrg
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-x2hrg
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-x2hrg and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 21 12:28:04.962: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x2hrg
Feb 21 12:28:04.970: INFO: Scaling statefulset ss to 0
Feb 21 12:28:25.016: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 12:28:25.019: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:28:25.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-x2hrg" for this suite.
Feb 21 12:28:31.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:28:31.462: INFO: namespace: e2e-tests-statefulset-x2hrg, resource: bindings, ignored listing per whitelist
Feb 21 12:28:31.470: INFO: namespace e2e-tests-statefulset-x2hrg deletion completed in 6.391394968s

• [SLOW TEST:56.078 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
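
The eviction here is forced by a host-port conflict between the pre-created test-pod and the stateful pod ss-0; the same Pending -> Failed -> deleted -> recreated sequence can be observed by hand (namespace and pod names are the ones from this run):

# Watch the statefulset controller delete and recreate ss-0
kubectl -n e2e-tests-statefulset-x2hrg get pod ss-0 -w

# Removing the pod that holds the conflicting port lets the recreated ss-0 schedule and run
kubectl -n e2e-tests-statefulset-x2hrg delete pod test-pod
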
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:28:31.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:28:41.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rhcbk" for this suite.
Feb 21 12:29:23.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:29:24.358: INFO: namespace: e2e-tests-kubelet-test-rhcbk, resource: bindings, ignored listing per whitelist
Feb 21 12:29:24.398: INFO: namespace e2e-tests-kubelet-test-rhcbk deletion completed in 42.500820002s

• [SLOW TEST:52.928 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
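
The pod spec is not echoed into the log; the property being exercised is readOnlyRootFilesystem in the container securityContext. A minimal, hypothetical pod that demonstrates it:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-rootfs-example
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # the write to / should fail; only explicitly mounted volumes remain writable
    command: ["sh", "-c", "touch /should-fail; echo exit=$?"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
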
S
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:29:24.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 12:29:24.782: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 21 12:29:29.843: INFO: Pod name rollover-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Feb 21 12:29:35.256: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 21 12:29:37.272: INFO: Creating deployment "test-rollover-deployment"
Feb 21 12:29:37.347: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 21 12:29:39.882: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 21 12:29:40.042: INFO: Ensure that both replica sets have 1 created replica
Feb 21 12:29:40.049: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 21 12:29:40.059: INFO: Updating deployment test-rollover-deployment
Feb 21 12:29:40.059: INFO: Waiting for deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 21 12:29:42.446: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 21 12:29:42.915: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 21 12:29:42.931: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:42.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884981, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:44.978: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:44.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884981, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:46.962: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:46.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884981, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:49.143: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:49.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884981, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:50.955: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:50.956: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884981, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:52.957: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:52.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:54.967: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:54.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:56.955: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:56.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:29:58.954: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:29:58.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:30:00.980: INFO: all replica sets need to contain the pod-template-hash label
Feb 21 12:30:00.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884991, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717884977, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 12:30:03.936: INFO: 
Feb 21 12:30:03.936: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 21 12:30:04.121: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-r5qtm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r5qtm/deployments/test-rollover-deployment,UID:d2e4e084-54a5-11ea-a994-fa163e34d433,ResourceVersion:22424136,Generation:2,CreationTimestamp:2020-02-21 12:29:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-21 12:29:37 +0000 UTC 2020-02-21 12:29:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-21 12:30:01 +0000 UTC 2020-02-21 12:29:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 21 12:30:04.128: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-r5qtm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r5qtm/replicasets/test-rollover-deployment-5b8479fdb6,UID:d48e66d6-54a5-11ea-a994-fa163e34d433,ResourceVersion:22424127,Generation:2,CreationTimestamp:2020-02-21 12:29:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2e4e084-54a5-11ea-a994-fa163e34d433 0xc001774bb7 0xc001774bb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 21 12:30:04.128: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 21 12:30:04.129: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-r5qtm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r5qtm/replicasets/test-rollover-controller,UID:cb700e32-54a5-11ea-a994-fa163e34d433,ResourceVersion:22424135,Generation:2,CreationTimestamp:2020-02-21 12:29:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2e4e084-54a5-11ea-a994-fa163e34d433 0xc001774817 0xc001774818}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 12:30:04.129: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-r5qtm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r5qtm/replicasets/test-rollover-deployment-58494b7559,UID:d2f82939-54a5-11ea-a994-fa163e34d433,ResourceVersion:22424091,Generation:2,CreationTimestamp:2020-02-21 12:29:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d2e4e084-54a5-11ea-a994-fa163e34d433 0xc001774a27 0xc001774a28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 12:30:04.136: INFO: Pod "test-rollover-deployment-5b8479fdb6-66nbf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-66nbf,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-r5qtm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r5qtm/pods/test-rollover-deployment-5b8479fdb6-66nbf,UID:d4b0488b-54a5-11ea-a994-fa163e34d433,ResourceVersion:22424112,Generation:0,CreationTimestamp:2020-02-21 12:29:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 d48e66d6-54a5-11ea-a994-fa163e34d433 0xc001008f17 0xc001008f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hldwd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hldwd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-hldwd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001008f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001008fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:29:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:29:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:29:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:29:40 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-21 12:29:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-21 12:29:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ead9972b335305141c421e2432562401b08a4097c46d5f60cdf3b504ff5a4b2e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:30:04.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-r5qtm" for this suite.
Feb 21 12:30:14.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:30:15.286: INFO: namespace: e2e-tests-deployment-r5qtm, resource: bindings, ignored listing per whitelist
Feb 21 12:30:15.424: INFO: namespace e2e-tests-deployment-r5qtm deletion completed in 11.277889022s

• [SLOW TEST:51.026 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
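
The deployment dump above shows the rollover configuration the test relies on: a RollingUpdate strategy with maxUnavailable 0, maxSurge 1 and minReadySeconds 10, so old replica sets are only scaled down once the new pod has been ready for 10 seconds. The suite drives the rollover through the API; a rough manual equivalent with kubectl (hypothetical and simplified; the test swaps the whole pod template, not just the image) would be:

# Update the pod template image and follow the rollout
kubectl -n e2e-tests-deployment-r5qtm set image deployment/test-rollover-deployment \
    redis=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n e2e-tests-deployment-r5qtm rollout status deployment/test-rollover-deployment

# Once the new replica set is available, the old ones should report 0 replicas
kubectl -n e2e-tests-deployment-r5qtm get rs -l name=rollover-pod
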
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:30:15.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qhsjf
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 21 12:30:15.665: INFO: Found 0 stateful pods, waiting for 3
Feb 21 12:30:25.710: INFO: Found 2 stateful pods, waiting for 3
Feb 21 12:30:36.579: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:30:36.580: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:30:36.580: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 21 12:30:45.739: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:30:45.739: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:30:45.739: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 21 12:30:45.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qhsjf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 12:30:46.335: INFO: stderr: "I0221 12:30:46.042326    3825 log.go:172] (0xc0001222c0) (0xc000805540) Create stream\nI0221 12:30:46.042648    3825 log.go:172] (0xc0001222c0) (0xc000805540) Stream added, broadcasting: 1\nI0221 12:30:46.048900    3825 log.go:172] (0xc0001222c0) Reply frame received for 1\nI0221 12:30:46.048980    3825 log.go:172] (0xc0001222c0) (0xc0008d8000) Create stream\nI0221 12:30:46.048989    3825 log.go:172] (0xc0001222c0) (0xc0008d8000) Stream added, broadcasting: 3\nI0221 12:30:46.050380    3825 log.go:172] (0xc0001222c0) Reply frame received for 3\nI0221 12:30:46.050412    3825 log.go:172] (0xc0001222c0) (0xc0008d80a0) Create stream\nI0221 12:30:46.050419    3825 log.go:172] (0xc0001222c0) (0xc0008d80a0) Stream added, broadcasting: 5\nI0221 12:30:46.051325    3825 log.go:172] (0xc0001222c0) Reply frame received for 5\nI0221 12:30:46.205805    3825 log.go:172] (0xc0001222c0) Data frame received for 3\nI0221 12:30:46.205877    3825 log.go:172] (0xc0008d8000) (3) Data frame handling\nI0221 12:30:46.205910    3825 log.go:172] (0xc0008d8000) (3) Data frame sent\nI0221 12:30:46.321362    3825 log.go:172] (0xc0001222c0) (0xc0008d8000) Stream removed, broadcasting: 3\nI0221 12:30:46.321549    3825 log.go:172] (0xc0001222c0) Data frame received for 1\nI0221 12:30:46.321564    3825 log.go:172] (0xc000805540) (1) Data frame handling\nI0221 12:30:46.321583    3825 log.go:172] (0xc000805540) (1) Data frame sent\nI0221 12:30:46.321593    3825 log.go:172] (0xc0001222c0) (0xc000805540) Stream removed, broadcasting: 1\nI0221 12:30:46.321831    3825 log.go:172] (0xc0001222c0) (0xc0008d80a0) Stream removed, broadcasting: 5\nI0221 12:30:46.322344    3825 log.go:172] (0xc0001222c0) Go away received\nI0221 12:30:46.322721    3825 log.go:172] (0xc0001222c0) (0xc000805540) Stream removed, broadcasting: 1\nI0221 12:30:46.322752    3825 log.go:172] (0xc0001222c0) (0xc0008d8000) Stream removed, broadcasting: 3\nI0221 12:30:46.322770    3825 log.go:172] (0xc0001222c0) (0xc0008d80a0) Stream removed, broadcasting: 5\n"
Feb 21 12:30:46.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 12:30:46.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 21 12:30:56.427: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 21 12:31:06.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qhsjf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:31:07.108: INFO: stderr: "I0221 12:31:06.809825    3847 log.go:172] (0xc000710370) (0xc000788640) Create stream\nI0221 12:31:06.809960    3847 log.go:172] (0xc000710370) (0xc000788640) Stream added, broadcasting: 1\nI0221 12:31:06.818234    3847 log.go:172] (0xc000710370) Reply frame received for 1\nI0221 12:31:06.818258    3847 log.go:172] (0xc000710370) (0xc0007886e0) Create stream\nI0221 12:31:06.818264    3847 log.go:172] (0xc000710370) (0xc0007886e0) Stream added, broadcasting: 3\nI0221 12:31:06.822997    3847 log.go:172] (0xc000710370) Reply frame received for 3\nI0221 12:31:06.823058    3847 log.go:172] (0xc000710370) (0xc0005bcdc0) Create stream\nI0221 12:31:06.823074    3847 log.go:172] (0xc000710370) (0xc0005bcdc0) Stream added, broadcasting: 5\nI0221 12:31:06.825419    3847 log.go:172] (0xc000710370) Reply frame received for 5\nI0221 12:31:06.956016    3847 log.go:172] (0xc000710370) Data frame received for 3\nI0221 12:31:06.956066    3847 log.go:172] (0xc0007886e0) (3) Data frame handling\nI0221 12:31:06.956096    3847 log.go:172] (0xc0007886e0) (3) Data frame sent\nI0221 12:31:07.101846    3847 log.go:172] (0xc000710370) (0xc0007886e0) Stream removed, broadcasting: 3\nI0221 12:31:07.102246    3847 log.go:172] (0xc000710370) Data frame received for 1\nI0221 12:31:07.102298    3847 log.go:172] (0xc000788640) (1) Data frame handling\nI0221 12:31:07.102306    3847 log.go:172] (0xc000788640) (1) Data frame sent\nI0221 12:31:07.102315    3847 log.go:172] (0xc000710370) (0xc000788640) Stream removed, broadcasting: 1\nI0221 12:31:07.102371    3847 log.go:172] (0xc000710370) (0xc0005bcdc0) Stream removed, broadcasting: 5\nI0221 12:31:07.102437    3847 log.go:172] (0xc000710370) Go away received\nI0221 12:31:07.102446    3847 log.go:172] (0xc000710370) (0xc000788640) Stream removed, broadcasting: 1\nI0221 12:31:07.102456    3847 log.go:172] (0xc000710370) (0xc0007886e0) Stream removed, broadcasting: 3\nI0221 12:31:07.102462    3847 log.go:172] (0xc000710370) (0xc0005bcdc0) Stream removed, broadcasting: 5\n"
Feb 21 12:31:07.109: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 12:31:07.109: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 12:31:17.188: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:31:17.188: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:17.188: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:17.188: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:27.736: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:31:27.736: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:27.736: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:37.242: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:31:37.242: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:37.242: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:47.288: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:31:47.288: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:31:57.222: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:31:57.222: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 21 12:32:07.220: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 21 12:32:17.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qhsjf ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 21 12:32:18.127: INFO: stderr: "I0221 12:32:17.520111    3869 log.go:172] (0xc0007262c0) (0xc00074a640) Create stream\nI0221 12:32:17.520300    3869 log.go:172] (0xc0007262c0) (0xc00074a640) Stream added, broadcasting: 1\nI0221 12:32:17.528971    3869 log.go:172] (0xc0007262c0) Reply frame received for 1\nI0221 12:32:17.529045    3869 log.go:172] (0xc0007262c0) (0xc000662b40) Create stream\nI0221 12:32:17.529063    3869 log.go:172] (0xc0007262c0) (0xc000662b40) Stream added, broadcasting: 3\nI0221 12:32:17.530653    3869 log.go:172] (0xc0007262c0) Reply frame received for 3\nI0221 12:32:17.530728    3869 log.go:172] (0xc0007262c0) (0xc000312000) Create stream\nI0221 12:32:17.530751    3869 log.go:172] (0xc0007262c0) (0xc000312000) Stream added, broadcasting: 5\nI0221 12:32:17.532051    3869 log.go:172] (0xc0007262c0) Reply frame received for 5\nI0221 12:32:17.934252    3869 log.go:172] (0xc0007262c0) Data frame received for 3\nI0221 12:32:17.934327    3869 log.go:172] (0xc000662b40) (3) Data frame handling\nI0221 12:32:17.934365    3869 log.go:172] (0xc000662b40) (3) Data frame sent\nI0221 12:32:18.116836    3869 log.go:172] (0xc0007262c0) Data frame received for 1\nI0221 12:32:18.116929    3869 log.go:172] (0xc0007262c0) (0xc000662b40) Stream removed, broadcasting: 3\nI0221 12:32:18.117018    3869 log.go:172] (0xc00074a640) (1) Data frame handling\nI0221 12:32:18.117049    3869 log.go:172] (0xc00074a640) (1) Data frame sent\nI0221 12:32:18.117070    3869 log.go:172] (0xc0007262c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0221 12:32:18.117115    3869 log.go:172] (0xc0007262c0) (0xc000312000) Stream removed, broadcasting: 5\nI0221 12:32:18.117549    3869 log.go:172] (0xc0007262c0) Go away received\nI0221 12:32:18.117711    3869 log.go:172] (0xc0007262c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0221 12:32:18.117728    3869 log.go:172] (0xc0007262c0) (0xc000662b40) Stream removed, broadcasting: 3\nI0221 12:32:18.117741    3869 log.go:172] (0xc0007262c0) (0xc000312000) Stream removed, broadcasting: 5\n"
Feb 21 12:32:18.127: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 21 12:32:18.127: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 21 12:32:28.393: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 21 12:32:38.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qhsjf ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 21 12:32:39.102: INFO: stderr: "I0221 12:32:38.723101    3892 log.go:172] (0xc0001306e0) (0xc0006e0640) Create stream\nI0221 12:32:38.723403    3892 log.go:172] (0xc0001306e0) (0xc0006e0640) Stream added, broadcasting: 1\nI0221 12:32:38.734064    3892 log.go:172] (0xc0001306e0) Reply frame received for 1\nI0221 12:32:38.734112    3892 log.go:172] (0xc0001306e0) (0xc0006b6be0) Create stream\nI0221 12:32:38.734125    3892 log.go:172] (0xc0001306e0) (0xc0006b6be0) Stream added, broadcasting: 3\nI0221 12:32:38.735675    3892 log.go:172] (0xc0001306e0) Reply frame received for 3\nI0221 12:32:38.735712    3892 log.go:172] (0xc0001306e0) (0xc000592000) Create stream\nI0221 12:32:38.735724    3892 log.go:172] (0xc0001306e0) (0xc000592000) Stream added, broadcasting: 5\nI0221 12:32:38.737579    3892 log.go:172] (0xc0001306e0) Reply frame received for 5\nI0221 12:32:38.914289    3892 log.go:172] (0xc0001306e0) Data frame received for 3\nI0221 12:32:38.914363    3892 log.go:172] (0xc0006b6be0) (3) Data frame handling\nI0221 12:32:38.914381    3892 log.go:172] (0xc0006b6be0) (3) Data frame sent\nI0221 12:32:39.093667    3892 log.go:172] (0xc0001306e0) Data frame received for 1\nI0221 12:32:39.093724    3892 log.go:172] (0xc0006e0640) (1) Data frame handling\nI0221 12:32:39.093736    3892 log.go:172] (0xc0006e0640) (1) Data frame sent\nI0221 12:32:39.094801    3892 log.go:172] (0xc0001306e0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0221 12:32:39.094993    3892 log.go:172] (0xc0001306e0) (0xc0006b6be0) Stream removed, broadcasting: 3\nI0221 12:32:39.095141    3892 log.go:172] (0xc0001306e0) (0xc000592000) Stream removed, broadcasting: 5\nI0221 12:32:39.095164    3892 log.go:172] (0xc0001306e0) Go away received\nI0221 12:32:39.095213    3892 log.go:172] (0xc0001306e0) (0xc0006e0640) Stream removed, broadcasting: 1\nI0221 12:32:39.095229    3892 log.go:172] (0xc0001306e0) (0xc0006b6be0) Stream removed, broadcasting: 3\nI0221 12:32:39.095242    3892 log.go:172] (0xc0001306e0) (0xc000592000) Stream removed, broadcasting: 5\n"
Feb 21 12:32:39.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 21 12:32:39.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 21 12:32:49.174: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:32:49.175: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:32:49.175: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:32:49.175: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:32:59.759: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:32:59.759: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:32:59.759: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:33:09.231: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:33:09.231: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:33:09.231: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:33:19.477: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:33:19.477: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:33:29.215: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
Feb 21 12:33:29.215: INFO: Waiting for Pod e2e-tests-statefulset-qhsjf/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 21 12:33:39.271: INFO: Waiting for StatefulSet e2e-tests-statefulset-qhsjf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 21 12:33:49.219: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qhsjf
Feb 21 12:33:49.226: INFO: Scaling statefulset ss2 to 0
Feb 21 12:34:19.282: INFO: Waiting for statefulset status.replicas updated to 0
Feb 21 12:34:19.289: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:34:19.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-qhsjf" for this suite.
Feb 21 12:34:27.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:34:27.570: INFO: namespace: e2e-tests-statefulset-qhsjf, resource: bindings, ignored listing per whitelist
Feb 21 12:34:27.693: INFO: namespace e2e-tests-statefulset-qhsjf deletion completed in 8.22792874s

• [SLOW TEST:252.268 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
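The spec above drives an image update and a rollback through the e2e framework; the same sequence can be reproduced by hand with kubectl. A minimal sketch, assuming the StatefulSet's template container is named nginx (an assumption; the namespace and revision hashes quoted are the ones from this run and will differ elsewhere):

# Trigger a new revision; the controller replaces pods in reverse ordinal order (ss2-2, ss2-1, ss2-0).
kubectl --namespace=e2e-tests-statefulset-qhsjf set image statefulset/ss2 \
    nginx=docker.io/library/nginx:1.15-alpine

# Wait until every pod reports the update revision (ss2-7c9b54fd4c in this run).
kubectl --namespace=e2e-tests-statefulset-qhsjf rollout status statefulset/ss2

# Roll back, which is what the "Rolling back to a previous revision" step verifies.
kubectl --namespace=e2e-tests-statefulset-qhsjf rollout undo statefulset/ss2
kubectl --namespace=e2e-tests-statefulset-qhsjf rollout status statefulset/ss2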
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:34:27.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8035675c-54a6-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:34:28.078: INFO: Waiting up to 5m0s for pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-9vh79" to be "success or failure"
Feb 21 12:34:28.227: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 148.869498ms
Feb 21 12:34:30.247: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168301656s
Feb 21 12:34:32.260: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181321572s
Feb 21 12:34:34.802: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723735051s
Feb 21 12:34:36.864: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.785134953s
Feb 21 12:34:38.877: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.798700658s
STEP: Saw pod success
Feb 21 12:34:38.877: INFO: Pod "pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:34:38.882: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 12:34:39.119: INFO: Waiting for pod pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:34:39.224: INFO: Pod pod-configmaps-80368b11-54a6-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:34:39.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9vh79" for this suite.
Feb 21 12:34:45.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:34:45.352: INFO: namespace: e2e-tests-configmap-9vh79, resource: bindings, ignored listing per whitelist
Feb 21 12:34:45.383: INFO: namespace e2e-tests-configmap-9vh79 deletion completed in 6.150596006s

• [SLOW TEST:17.690 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
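The test above mounts a ConfigMap volume and checks that the projected files carry the mode given in defaultMode. A stand-alone sketch of the same behaviour, with illustrative names and a plain busybox image rather than the framework's generated spec:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    # -L dereferences the ..data symlink indirection used by projected volumes,
    # so the mode of the real file (0400 here) is what gets printed.
    command: ["sh", "-c", "ls -lL /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      defaultMode: 0400
EOF
kubectl logs configmap-defaultmode-demo   # once the pod reaches Succeeded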
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:34:45.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 21 12:34:56.161: INFO: Successfully updated pod "annotationupdate8aa57cc6-54a6-11ea-b1f8-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:34:58.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-p4btb" for this suite.
Feb 21 12:35:22.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:35:22.448: INFO: namespace: e2e-tests-downward-api-p4btb, resource: bindings, ignored listing per whitelist
Feb 21 12:35:22.497: INFO: namespace e2e-tests-downward-api-p4btb deletion completed in 24.251469913s

• [SLOW TEST:37.113 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
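This spec verifies that a file projected from a downwardAPI volume tracks annotation changes on the running pod, which is what the "Successfully updated pod" line above reflects. An illustrative equivalent (names and image are not the framework's own):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-annotations-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 10; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

# After the annotation changes, the kubelet rewrites /etc/podinfo/annotations on its next sync.
kubectl annotate pod downward-annotations-demo build="2" --overwrite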
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:35:22.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 21 12:35:34.281: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:35:35.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-wr68m" for this suite.
Feb 21 12:36:00.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:36:00.417: INFO: namespace: e2e-tests-replicaset-wr68m, resource: bindings, ignored listing per whitelist
Feb 21 12:36:00.471: INFO: namespace e2e-tests-replicaset-wr68m deletion completed in 24.575827268s

• [SLOW TEST:37.974 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
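The adoption/release flow above can be reproduced directly: a bare pod whose labels match a ReplicaSet's selector gets an ownerReference to that ReplicaSet, and changing the label releases it again. A sketch in which only the label key/value and pod name are taken from the log; the ReplicaSet name is illustrative:

# A bare pod carrying the label the ReplicaSet will select on.
kubectl run pod-adoption-release --image=k8s.gcr.io/pause:3.1 --restart=Never \
    --labels=name=pod-adoption-release

# A ReplicaSet with a matching selector adopts the orphan instead of creating a new pod,
# which is why the log reports "Found 1 pods out of 1".
kubectl create -f - <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release-rs
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# The orphan now lists the ReplicaSet as its controller...
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].kind}'
# ...and relabelling it releases it (the ReplicaSet then creates a replacement).
kubectl label pod pod-adoption-release name=pod-adoption-released --overwrite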
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:36:00.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 21 12:36:00.679: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 12:36:00.693: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 12:36:00.696: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 21 12:36:00.711: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 21 12:36:00.711: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 12:36:00.711: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:36:00.711: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 21 12:36:00.711: INFO: 	Container weave ready: true, restart count 0
Feb 21 12:36:00.711: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 12:36:00.711: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:36:00.711: INFO: 	Container coredns ready: true, restart count 0
Feb 21 12:36:00.711: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:36:00.711: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:36:00.711: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:36:00.711: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:36:00.711: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f56b100c512351], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:36:01.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-wzk22" for this suite.
Feb 21 12:36:07.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:36:07.980: INFO: namespace: e2e-tests-sched-pred-wzk22, resource: bindings, ignored listing per whitelist
Feb 21 12:36:08.005: INFO: namespace e2e-tests-sched-pred-wzk22 deletion completed in 6.210476182s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.533 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
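The predicate being exercised: a pod whose nodeSelector matches no node stays Pending, and the scheduler records a FailedScheduling event like the one quoted above. An illustrative reproduction (the selector key/value are deliberately nonexistent):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    deliberately-missing-label: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

# The pod never leaves Pending; the event explains why.
kubectl get events --field-selector involvedObject.name=restricted-pod,reason=FailedScheduling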
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:36:08.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 21 12:36:08.233: INFO: Waiting up to 5m0s for pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-mmfkj" to be "success or failure"
Feb 21 12:36:08.273: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 38.763011ms
Feb 21 12:36:10.284: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049825054s
Feb 21 12:36:12.297: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063209992s
Feb 21 12:36:14.317: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083365433s
Feb 21 12:36:16.332: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098603008s
Feb 21 12:36:18.359: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125385449s
Feb 21 12:36:20.381: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.14726044s
STEP: Saw pod success
Feb 21 12:36:20.381: INFO: Pod "pod-bbda43f2-54a6-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:36:20.395: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bbda43f2-54a6-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:36:20.478: INFO: Waiting for pod pod-bbda43f2-54a6-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:36:20.485: INFO: Pod pod-bbda43f2-54a6-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:36:20.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mmfkj" for this suite.
Feb 21 12:36:26.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:36:26.735: INFO: namespace: e2e-tests-emptydir-mmfkj, resource: bindings, ignored listing per whitelist
Feb 21 12:36:26.748: INFO: namespace e2e-tests-emptydir-mmfkj deletion completed in 6.255730383s

• [SLOW TEST:18.743 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
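This test writes a 0644 file into a memory-backed emptyDir as root and checks both the mount type and the mode. A compact sketch of the same check; the framework drives it through its own mount-test helper image, whereas this version uses plain busybox and illustrative names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && mount | grep /test-volume && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs
EOF
kubectl logs emptydir-tmpfs-0644   # expect a tmpfs mount line and -rw-r--r--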
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:36:26.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-c71d17e4-54a6-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 12:36:27.055: INFO: Waiting up to 5m0s for pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-vrr45" to be "success or failure"
Feb 21 12:36:27.191: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 135.565103ms
Feb 21 12:36:29.199: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143775419s
Feb 21 12:36:31.242: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187385468s
Feb 21 12:36:33.260: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204612912s
Feb 21 12:36:35.983: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.927590753s
Feb 21 12:36:37.999: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.944137141s
STEP: Saw pod success
Feb 21 12:36:37.999: INFO: Pod "pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:36:38.004: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 12:36:38.522: INFO: Waiting for pod pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:36:38.536: INFO: Pod pod-secrets-c71df273-54a6-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:36:38.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-vrr45" for this suite.
Feb 21 12:36:44.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:36:44.867: INFO: namespace: e2e-tests-secrets-vrr45, resource: bindings, ignored listing per whitelist
Feb 21 12:36:44.874: INFO: namespace e2e-tests-secrets-vrr45 deletion completed in 6.316114063s

• [SLOW TEST:18.125 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
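"With mappings" means the Secret volume uses items to project a key under a different path instead of the key name itself. An illustrative equivalent of the spec the framework generates:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # The key data-1 is only visible under the mapped path.
    command: ["sh", "-c", "ls /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
EOF
kubectl logs secret-mapping-demo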
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:36:44.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 21 12:37:05.239: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:05.255: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:07.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:07.279: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:09.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:09.272: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:11.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:11.272: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:13.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:13.282: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:15.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:15.274: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:17.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:17.275: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:19.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:19.288: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:21.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:21.277: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:23.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:23.285: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:25.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:25.270: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:27.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:27.285: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 21 12:37:29.256: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 21 12:37:29.275: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:37:29.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4shl2" for this suite.
Feb 21 12:37:53.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:37:53.457: INFO: namespace: e2e-tests-container-lifecycle-hook-4shl2, resource: bindings, ignored listing per whitelist
Feb 21 12:37:53.691: INFO: namespace e2e-tests-container-lifecycle-hook-4shl2 deletion completed in 24.359711471s

• [SLOW TEST:68.817 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
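The spec above deletes a pod that carries a preStop exec hook and confirms the hook ran before the container was killed; the framework does so by having the hook report back to the helper pod created in BeforeEach ("the container to handle the HTTPGet hook request"). A self-contained sketch of the same mechanism, using a file write instead of the callback (names, image, and command are illustrative):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          # Runs inside the container after the delete request, before SIGTERM is delivered.
          command: ["/bin/sh", "-c", "echo goodbye > /usr/share/nginx/html/prestop && sleep 5"]
EOF

# Deleting the pod triggers the hook; the pod lingers (as in the "still exists" loop above)
# until the hook and graceful termination finish.
kubectl delete pod pod-with-prestop-exec-hook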
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:37:53.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 21 12:37:53.919: INFO: Waiting up to 5m0s for pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-n5snx" to be "success or failure"
Feb 21 12:37:53.963: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 43.324845ms
Feb 21 12:37:55.990: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07114739s
Feb 21 12:37:58.009: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090016397s
Feb 21 12:38:00.283: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36378946s
Feb 21 12:38:02.299: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379712981s
Feb 21 12:38:04.314: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.394390948s
STEP: Saw pod success
Feb 21 12:38:04.314: INFO: Pod "downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:38:04.317: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 12:38:04.543: INFO: Waiting for pod downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:38:04.552: INFO: Pod downward-api-fae6cf64-54a6-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:38:04.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n5snx" for this suite.
Feb 21 12:38:10.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:38:10.806: INFO: namespace: e2e-tests-downward-api-n5snx, resource: bindings, ignored listing per whitelist
Feb 21 12:38:11.121: INFO: namespace e2e-tests-downward-api-n5snx deletion completed in 6.55331748s

• [SLOW TEST:17.429 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
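The dapi-container above reads its own name, namespace, and IP from environment variables populated via the downward API. A minimal sketch with illustrative names:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'POD_NAME|POD_NAMESPACE|POD_IP'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-env-demo   # once the pod reaches Succeeded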
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:38:11.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 21 12:38:11.509: INFO: Waiting up to 5m0s for pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-l64qs" to be "success or failure"
Feb 21 12:38:11.563: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 53.524229ms
Feb 21 12:38:13.575: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066335731s
Feb 21 12:38:15.593: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084416001s
Feb 21 12:38:18.638: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.129268392s
Feb 21 12:38:20.651: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.141547327s
Feb 21 12:38:22.659: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.149741853s
STEP: Saw pod success
Feb 21 12:38:22.659: INFO: Pod "pod-055b9755-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:38:22.663: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-055b9755-54a7-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:38:23.105: INFO: Waiting for pod pod-055b9755-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:38:23.565: INFO: Pod pod-055b9755-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:38:23.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l64qs" for this suite.
Feb 21 12:38:29.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:38:29.978: INFO: namespace: e2e-tests-emptydir-l64qs, resource: bindings, ignored listing per whitelist
Feb 21 12:38:30.071: INFO: namespace e2e-tests-emptydir-l64qs deletion completed in 6.457621336s

• [SLOW TEST:18.950 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
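Same emptyDir-on-tmpfs check as before, but performed as a non-root user with 0777 permissions; the interesting difference is the pod-level securityContext. Illustrative sketch (the UID is an assumption, not the one the framework uses):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-0777-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "id -u && touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF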
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:38:30.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 21 12:38:30.433: INFO: Waiting up to 5m0s for pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-bffj5" to be "success or failure"
Feb 21 12:38:30.441: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.798808ms
Feb 21 12:38:32.476: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042834847s
Feb 21 12:38:34.499: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065751561s
Feb 21 12:38:36.531: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098329254s
Feb 21 12:38:38.571: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137911363s
Feb 21 12:38:40.703: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.270003195s
STEP: Saw pod success
Feb 21 12:38:40.703: INFO: Pod "pod-10aa8f64-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:38:40.730: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-10aa8f64-54a7-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:38:41.731: INFO: Waiting for pod pod-10aa8f64-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:38:41.740: INFO: Pod pod-10aa8f64-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:38:41.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bffj5" for this suite.
Feb 21 12:38:47.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:38:47.902: INFO: namespace: e2e-tests-emptydir-bffj5, resource: bindings, ignored listing per whitelist
Feb 21 12:38:48.015: INFO: namespace e2e-tests-emptydir-bffj5 deletion completed in 6.268812122s

• [SLOW TEST:17.943 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
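And the default-medium variant: identical to the earlier 0644 case except that emptyDir: {} is backed by the node's default storage rather than tmpfs. Illustrative sketch:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-0644
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # node's default medium (disk), not tmpfs
EOF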
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:38:48.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 21 12:38:48.269: INFO: PodSpec: initContainers in spec.initContainers
Feb 21 12:39:59.937: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1b4f8b8a-54a7-11ea-b1f8-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-hzp7r", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-hzp7r/pods/pod-init-1b4f8b8a-54a7-11ea-b1f8-0242ac110008", UID:"1b50dbf8-54a7-11ea-a994-fa163e34d433", ResourceVersion:"22425528", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717885528, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"269137600"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-lh7qp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024ca0c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lh7qp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lh7qp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-lh7qp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002468098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002196000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002468110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002468130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002468138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00246813c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885528, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885528, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885528, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885528, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00204e040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002744f50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002744fc0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://cfba2b17be9411bb240186e63360e50e7f99f17bb20e389a4a9229845a65387c"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00204e080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00204e060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:39:59.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-hzp7r" for this suite.
Feb 21 12:40:24.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:40:24.279: INFO: namespace: e2e-tests-init-container-hzp7r, resource: bindings, ignored listing per whitelist
Feb 21 12:40:24.358: INFO: namespace e2e-tests-init-container-hzp7r deletion completed in 24.294726616s

• [SLOW TEST:96.343 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
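The pod dump above shows exactly what this test builds: init1 runs /bin/false, init2 runs /bin/true, and the app container run1 is pause:3.1 with RestartPolicy Always. Stripped of the generated names, resource limits, and token volume, an equivalent manifest looks like this; init1 crash-loops with back-off, init2 never starts, and run1 stays un-started, which is the behaviour being asserted:

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF

# init1's restart count keeps climbing while run1 never leaves PodInitializing.
kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'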
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:40:24.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 21 12:40:25.191: INFO: created pod pod-service-account-defaultsa
Feb 21 12:40:25.191: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 21 12:40:25.222: INFO: created pod pod-service-account-mountsa
Feb 21 12:40:25.222: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 21 12:40:25.269: INFO: created pod pod-service-account-nomountsa
Feb 21 12:40:25.269: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 21 12:40:25.304: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 21 12:40:25.304: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 21 12:40:25.401: INFO: created pod pod-service-account-mountsa-mountspec
Feb 21 12:40:25.402: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 21 12:40:25.432: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 21 12:40:25.433: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 21 12:40:25.463: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 21 12:40:25.463: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 21 12:40:25.484: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 21 12:40:25.484: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 21 12:40:25.623: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 21 12:40:25.624: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:40:25.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-5cd9q" for this suite.
Feb 21 12:40:55.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:40:55.822: INFO: namespace: e2e-tests-svcaccounts-5cd9q, resource: bindings, ignored listing per whitelist
Feb 21 12:40:55.909: INFO: namespace e2e-tests-svcaccounts-5cd9q deletion completed in 30.252218397s

• [SLOW TEST:31.551 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
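The nine pods above cover every combination of the ServiceAccount-level and pod-level automountServiceAccountToken settings, and the printed "volume mount: true/false" lines show that the pod-level field, when set, takes precedence. A minimal sketch of opting out on the ServiceAccount and overriding per pod (resource names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # default for pods that use this ServiceAccount
---
apiVersion: v1
kind: Pod
metadata:
  name: automount-override-demo
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true  # pod-level setting wins, so the token volume is mounted anyway
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
EOF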
SS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:40:55.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 21 12:41:06.371: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-6784c641-54a7-11ea-b1f8-0242ac110008", GenerateName:"", Namespace:"e2e-tests-pods-ncdmj", SelfLink:"/api/v1/namespaces/e2e-tests-pods-ncdmj/pods/pod-submit-remove-6784c641-54a7-11ea-b1f8-0242ac110008", UID:"67869b3f-54a7-11ea-a994-fa163e34d433", ResourceVersion:"22425729", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717885656, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"124808228"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kmbm8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001125040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kmbm8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015ce7f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001ab4840), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015ce830)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0015ce850)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0015ce858), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0015ce85c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885656, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885665, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885665, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717885656, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000c3b9e0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000c3ba20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://1db1af963fb62841cac04d412782e72d9ef8b0ab7d5fb7c5b9187cde9b936079"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:41:22.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-ncdmj" for this suite.
Feb 21 12:41:28.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:41:29.124: INFO: namespace: e2e-tests-pods-ncdmj, resource: bindings, ignored listing per whitelist
Feb 21 12:41:29.148: INFO: namespace e2e-tests-pods-ncdmj deletion completed in 6.443227256s

• [SLOW TEST:33.238 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
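The same create / watch / graceful-delete sequence can be reproduced by hand. In this sketch the name: foo label mirrors the pod dump above, while the pod name and grace period are illustrative:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: submit-remove-demo
  labels: {name: foo}
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pods -l name=foo --watch &                    # observe the ADDED/MODIFIED/DELETED events the test waits for
kubectl delete pod submit-remove-demo --grace-period=30   # graceful delete: deletionTimestamp is set first, then the kubelet stops the container and the object disappears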
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:41:29.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7b58509d-54a7-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:41:29.425: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-5rkc7" to be "success or failure"
Feb 21 12:41:29.456: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 30.432639ms
Feb 21 12:41:31.471: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045370136s
Feb 21 12:41:33.501: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075408949s
Feb 21 12:41:35.520: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.094669727s
Feb 21 12:41:37.928: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.502780747s
Feb 21 12:41:39.952: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.52687459s
Feb 21 12:41:41.986: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.560495559s
STEP: Saw pod success
Feb 21 12:41:41.986: INFO: Pod "pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:41:41.994: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 12:41:42.237: INFO: Waiting for pod pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:41:42.267: INFO: Pod pod-projected-configmaps-7b598c38-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:41:42.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5rkc7" for this suite.
Feb 21 12:41:48.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:41:48.661: INFO: namespace: e2e-tests-projected-5rkc7, resource: bindings, ignored listing per whitelist
Feb 21 12:41:48.701: INFO: namespace e2e-tests-projected-5rkc7 deletion completed in 6.418976801s

• [SLOW TEST:19.553 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
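A sketch of the kind of manifest this test builds: a projected volume wrapping a ConfigMap with defaultMode set, consumed by a short-lived pod that prints the file mode and contents. The resource names, the data key/value and the 0400 mode are illustrative; the container name mirrors the log above.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "stat -L -c '%a' /etc/projected/data-1 && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400            # YAML octal; files in the projected volume are created with this mode
      sources:
      - configMap:
          name: projected-cm-demo
EOF

Once the pod reports Succeeded, kubectl logs projected-defaultmode-demo should print the configured mode followed by the file contents.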
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:41:48.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-9b8f
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 12:41:48.923: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9b8f" in namespace "e2e-tests-subpath-5cc9n" to be "success or failure"
Feb 21 12:41:49.060: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 136.581938ms
Feb 21 12:41:51.356: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432228043s
Feb 21 12:41:53.379: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455022845s
Feb 21 12:41:55.528: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.604897307s
Feb 21 12:41:57.544: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.620754516s
Feb 21 12:41:59.578: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653922873s
Feb 21 12:42:01.626: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.702111755s
Feb 21 12:42:03.644: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.720519715s
Feb 21 12:42:05.654: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 16.730612992s
Feb 21 12:42:07.687: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 18.763276186s
Feb 21 12:42:09.712: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 20.788312531s
Feb 21 12:42:11.729: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 22.805421424s
Feb 21 12:42:13.746: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 24.822877765s
Feb 21 12:42:15.763: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 26.839747815s
Feb 21 12:42:17.784: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 28.860310402s
Feb 21 12:42:19.831: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 30.907588278s
Feb 21 12:42:21.865: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Running", Reason="", readiness=false. Elapsed: 32.941529691s
Feb 21 12:42:24.150: INFO: Pod "pod-subpath-test-downwardapi-9b8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.226413998s
STEP: Saw pod success
Feb 21 12:42:24.150: INFO: Pod "pod-subpath-test-downwardapi-9b8f" satisfied condition "success or failure"
Feb 21 12:42:24.155: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-9b8f container test-container-subpath-downwardapi-9b8f: 
STEP: delete the pod
Feb 21 12:42:24.677: INFO: Waiting for pod pod-subpath-test-downwardapi-9b8f to disappear
Feb 21 12:42:24.697: INFO: Pod pod-subpath-test-downwardapi-9b8f no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-9b8f
Feb 21 12:42:24.697: INFO: Deleting pod "pod-subpath-test-downwardapi-9b8f" in namespace "e2e-tests-subpath-5cc9n"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:42:24.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-5cc9n" for this suite.
Feb 21 12:42:32.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:42:32.943: INFO: namespace: e2e-tests-subpath-5cc9n, resource: bindings, ignored listing per whitelist
Feb 21 12:42:32.992: INFO: namespace e2e-tests-subpath-5cc9n deletion completed in 8.27981216s

• [SLOW TEST:44.291 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
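A sketch of an atomic-writer subPath mount like the one this test creates: a downwardAPI volume exposing metadata.name, with the container mounting just that one file via subPath (pod and volume names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["sh", "-c", "cat /subpath/podname"]
    volumeMounts:
    - name: downward
      mountPath: /subpath/podname
      subPath: podname             # mount a single file out of the atomically-updated volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

kubectl logs subpath-downwardapi-demo should print the pod's own name.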
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:42:32.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-a16ca8cb-54a7-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:42:33.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-2bjgx" to be "success or failure"
Feb 21 12:42:33.384: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 84.421151ms
Feb 21 12:42:35.391: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091989475s
Feb 21 12:42:37.412: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113148479s
Feb 21 12:42:39.936: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63664534s
Feb 21 12:42:42.122: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822442551s
Feb 21 12:42:44.141: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.841692357s
STEP: Saw pod success
Feb 21 12:42:44.141: INFO: Pod "pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:42:44.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 12:42:44.750: INFO: Waiting for pod pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:42:44.797: INFO: Pod pod-configmaps-a16df81f-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:42:44.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2bjgx" for this suite.
Feb 21 12:42:50.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:42:51.137: INFO: namespace: e2e-tests-configmap-2bjgx, resource: bindings, ignored listing per whitelist
Feb 21 12:42:51.182: INFO: namespace e2e-tests-configmap-2bjgx deletion completed in 6.267905634s

• [SLOW TEST:18.189 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
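A sketch of the "mappings as non-root" variant: a ConfigMap volume whose items remap a key to a nested path, read by a pod running as a non-root UID. Names, the key/value pair and the UID are illustrative; the container name mirrors the log above.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-map-demo
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # run the containers as a non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: cm-map-demo
      items:
      - key: data-2
        path: path/to/data-2       # the key is remapped to a nested path inside the mount
EOF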
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:42:51.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 21 12:42:51.563: INFO: Waiting up to 5m0s for pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-mfsl4" to be "success or failure"
Feb 21 12:42:51.588: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.1733ms
Feb 21 12:42:53.639: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075483117s
Feb 21 12:42:55.650: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086568302s
Feb 21 12:42:57.682: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118073367s
Feb 21 12:43:00.034: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470280495s
Feb 21 12:43:02.063: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.500003713s
STEP: Saw pod success
Feb 21 12:43:02.064: INFO: Pod "pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:43:02.071: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:43:02.396: INFO: Waiting for pod pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:43:02.554: INFO: Pod pod-ac4bfc68-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:43:02.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mfsl4" for this suite.
Feb 21 12:43:08.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:43:08.680: INFO: namespace: e2e-tests-emptydir-mfsl4, resource: bindings, ignored listing per whitelist
Feb 21 12:43:08.784: INFO: namespace e2e-tests-emptydir-mfsl4 deletion completed in 6.192512098s

• [SLOW TEST:17.602 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
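The "volume on tmpfs" variant corresponds to an emptyDir with medium: Memory. A sketch that prints the mount's filesystem type and the mode of the volume root (pod and volume names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /mnt/tmpfs; stat -c '%a' /mnt/tmpfs"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/tmpfs
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # backed by tmpfs on the node instead of node disk
EOF

kubectl logs emptydir-tmpfs-demo should show a tmpfs mount entry followed by the volume's mode.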
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:43:08.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b6a913d7-54a7-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:43:08.956: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-mdltp" to be "success or failure"
Feb 21 12:43:08.975: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.117364ms
Feb 21 12:43:10.990: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033746219s
Feb 21 12:43:13.005: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048369057s
Feb 21 12:43:15.025: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068623237s
Feb 21 12:43:17.234: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277445726s
Feb 21 12:43:19.284: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.327613524s
STEP: Saw pod success
Feb 21 12:43:19.284: INFO: Pod "pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:43:19.291: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 21 12:43:19.465: INFO: Waiting for pod pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:43:19.483: INFO: Pod pod-projected-configmaps-b6a9a8de-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:43:19.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mdltp" for this suite.
Feb 21 12:43:25.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:43:25.629: INFO: namespace: e2e-tests-projected-mdltp, resource: bindings, ignored listing per whitelist
Feb 21 12:43:25.717: INFO: namespace e2e-tests-projected-mdltp deletion completed in 6.228642486s

• [SLOW TEST:16.933 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:43:25.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 12:43:25.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-r7khd'
Feb 21 12:43:27.985: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 21 12:43:27.985: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 21 12:43:28.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-r7khd'
Feb 21 12:43:28.290: INFO: stderr: ""
Feb 21 12:43:28.291: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:43:28.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r7khd" for this suite.
Feb 21 12:43:36.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:43:36.631: INFO: namespace: e2e-tests-kubectl-r7khd, resource: bindings, ignored listing per whitelist
Feb 21 12:43:36.827: INFO: namespace e2e-tests-kubectl-r7khd deletion completed in 8.517747006s

• [SLOW TEST:11.109 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
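The deprecation warning above points at "kubectl create" as the replacement for --generator=job/v1. The equivalent Job manifest, usable regardless of kubectl version (the Job and container names simply mirror the test's):

cat <<'EOF' | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure     # what --restart=OnFailure selected in the deprecated generator
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl delete job e2e-test-nginx-job   # same cleanup the test performs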
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:43:36.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 21 12:43:37.094: INFO: Waiting up to 5m0s for pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-var-expansion-qskpz" to be "success or failure"
Feb 21 12:43:37.110: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.236847ms
Feb 21 12:43:39.120: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025182922s
Feb 21 12:43:41.144: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049422713s
Feb 21 12:43:43.409: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.314659029s
Feb 21 12:43:45.564: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.469172618s
Feb 21 12:43:47.577: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.481947567s
STEP: Saw pod success
Feb 21 12:43:47.577: INFO: Pod "var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:43:47.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 21 12:43:48.789: INFO: Waiting for pod var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:43:48.796: INFO: Pod var-expansion-c774e781-54a7-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:43:48.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-qskpz" for this suite.
Feb 21 12:43:54.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:43:54.897: INFO: namespace: e2e-tests-var-expansion-qskpz, resource: bindings, ignored listing per whitelist
Feb 21 12:43:55.143: INFO: namespace e2e-tests-var-expansion-qskpz deletion completed in 6.334229473s

• [SLOW TEST:18.317 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
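Substitution in container args uses the $(VAR) syntax, which the kubelet expands from the container's environment before the container starts. A sketch (pod name, env var and message are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    command: ["sh", "-c"]
    args: ["echo running in pod $(POD_NAME)"]   # $(POD_NAME) is expanded by the kubelet, not by the shell
EOF
kubectl logs var-expansion-demo   # prints "running in pod var-expansion-demo"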
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:43:55.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 21 12:44:05.947: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d2569501-54a7-11ea-b1f8-0242ac110008"
Feb 21 12:44:05.947: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d2569501-54a7-11ea-b1f8-0242ac110008" in namespace "e2e-tests-pods-tcxlc" to be "terminated due to deadline exceeded"
Feb 21 12:44:05.988: INFO: Pod "pod-update-activedeadlineseconds-d2569501-54a7-11ea-b1f8-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 40.707507ms
Feb 21 12:44:08.011: INFO: Pod "pod-update-activedeadlineseconds-d2569501-54a7-11ea-b1f8-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.064065673s
Feb 21 12:44:08.011: INFO: Pod "pod-update-activedeadlineseconds-d2569501-54a7-11ea-b1f8-0242ac110008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:44:08.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-tcxlc" for this suite.
Feb 21 12:44:14.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:44:14.247: INFO: namespace: e2e-tests-pods-tcxlc, resource: bindings, ignored listing per whitelist
Feb 21 12:44:14.280: INFO: namespace e2e-tests-pods-tcxlc deletion completed in 6.260303415s

• [SLOW TEST:19.136 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
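spec.activeDeadlineSeconds is one of the few pod fields that may be mutated on a running pod, which is what this test relies on: after the update the pod is killed and ends up Failed with reason DeadlineExceeded, as logged above. A sketch of doing the same by hand (pod name and the 5-second deadline are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
EOF
# once the pod is Running, set a short deadline; the kubelet then terminates it
kubectl patch pod pod-update-activedeadlineseconds-demo -p '{"spec":{"activeDeadlineSeconds":5}}'
sleep 10
kubectl get pod pod-update-activedeadlineseconds-demo -o jsonpath='{.status.phase} {.status.reason}'   # expected: Failed DeadlineExceeded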
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:44:14.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:44:26.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-4tk4l" for this suite.
Feb 21 12:45:16.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:45:16.841: INFO: namespace: e2e-tests-kubelet-test-4tk4l, resource: bindings, ignored listing per whitelist
Feb 21 12:45:16.858: INFO: namespace e2e-tests-kubelet-test-4tk4l deletion completed in 50.214102913s

• [SLOW TEST:62.578 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
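The busybox variant of this check boils down to running a one-shot command in a pod and confirming its stdout shows up in the container log. A minimal sketch (pod name and message are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo hello from the busybox container"]
EOF
kubectl logs busybox-logs-demo   # the echoed line should appear here once the container has run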
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:45:16.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 21 12:45:17.110: INFO: Waiting up to 5m0s for pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-6ld7c" to be "success or failure"
Feb 21 12:45:17.122: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.324995ms
Feb 21 12:45:19.558: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.447431753s
Feb 21 12:45:21.580: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469154851s
Feb 21 12:45:24.564: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.453796174s
Feb 21 12:45:26.588: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.477393363s
Feb 21 12:45:28.627: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.51613806s
STEP: Saw pod success
Feb 21 12:45:28.627: INFO: Pod "pod-030ad0ea-54a8-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:45:28.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-030ad0ea-54a8-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:45:29.798: INFO: Waiting for pod pod-030ad0ea-54a8-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:45:29.896: INFO: Pod pod-030ad0ea-54a8-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:45:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6ld7c" for this suite.
Feb 21 12:45:35.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:45:36.179: INFO: namespace: e2e-tests-emptydir-6ld7c, resource: bindings, ignored listing per whitelist
Feb 21 12:45:36.203: INFO: namespace e2e-tests-emptydir-6ld7c deletion completed in 6.29820455s

• [SLOW TEST:19.345 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:45:36.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 21 12:46:02.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:02.731: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:02.798464       8 log.go:172] (0xc00138c2c0) (0xc002622640) Create stream
I0221 12:46:02.798631       8 log.go:172] (0xc00138c2c0) (0xc002622640) Stream added, broadcasting: 1
I0221 12:46:02.820354       8 log.go:172] (0xc00138c2c0) Reply frame received for 1
I0221 12:46:02.821361       8 log.go:172] (0xc00138c2c0) (0xc0018c0000) Create stream
I0221 12:46:02.821736       8 log.go:172] (0xc00138c2c0) (0xc0018c0000) Stream added, broadcasting: 3
I0221 12:46:02.830003       8 log.go:172] (0xc00138c2c0) Reply frame received for 3
I0221 12:46:02.830119       8 log.go:172] (0xc00138c2c0) (0xc0026226e0) Create stream
I0221 12:46:02.830139       8 log.go:172] (0xc00138c2c0) (0xc0026226e0) Stream added, broadcasting: 5
I0221 12:46:02.833463       8 log.go:172] (0xc00138c2c0) Reply frame received for 5
I0221 12:46:03.023799       8 log.go:172] (0xc00138c2c0) Data frame received for 3
I0221 12:46:03.023908       8 log.go:172] (0xc0018c0000) (3) Data frame handling
I0221 12:46:03.023937       8 log.go:172] (0xc0018c0000) (3) Data frame sent
I0221 12:46:03.165189       8 log.go:172] (0xc00138c2c0) Data frame received for 1
I0221 12:46:03.165346       8 log.go:172] (0xc00138c2c0) (0xc0018c0000) Stream removed, broadcasting: 3
I0221 12:46:03.165426       8 log.go:172] (0xc002622640) (1) Data frame handling
I0221 12:46:03.165463       8 log.go:172] (0xc002622640) (1) Data frame sent
I0221 12:46:03.165498       8 log.go:172] (0xc00138c2c0) (0xc0026226e0) Stream removed, broadcasting: 5
I0221 12:46:03.165611       8 log.go:172] (0xc00138c2c0) (0xc002622640) Stream removed, broadcasting: 1
I0221 12:46:03.165770       8 log.go:172] (0xc00138c2c0) Go away received
I0221 12:46:03.165863       8 log.go:172] (0xc00138c2c0) (0xc002622640) Stream removed, broadcasting: 1
I0221 12:46:03.165886       8 log.go:172] (0xc00138c2c0) (0xc0018c0000) Stream removed, broadcasting: 3
I0221 12:46:03.165906       8 log.go:172] (0xc00138c2c0) (0xc0026226e0) Stream removed, broadcasting: 5
Feb 21 12:46:03.165: INFO: Exec stderr: ""
Feb 21 12:46:03.166: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:03.166: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:03.270456       8 log.go:172] (0xc000a5c580) (0xc001d6e460) Create stream
I0221 12:46:03.271073       8 log.go:172] (0xc000a5c580) (0xc001d6e460) Stream added, broadcasting: 1
I0221 12:46:03.282358       8 log.go:172] (0xc000a5c580) Reply frame received for 1
I0221 12:46:03.282447       8 log.go:172] (0xc000a5c580) (0xc001cbc000) Create stream
I0221 12:46:03.282460       8 log.go:172] (0xc000a5c580) (0xc001cbc000) Stream added, broadcasting: 3
I0221 12:46:03.284678       8 log.go:172] (0xc000a5c580) Reply frame received for 3
I0221 12:46:03.284737       8 log.go:172] (0xc000a5c580) (0xc002622780) Create stream
I0221 12:46:03.284765       8 log.go:172] (0xc000a5c580) (0xc002622780) Stream added, broadcasting: 5
I0221 12:46:03.286395       8 log.go:172] (0xc000a5c580) Reply frame received for 5
I0221 12:46:03.428145       8 log.go:172] (0xc000a5c580) Data frame received for 3
I0221 12:46:03.428307       8 log.go:172] (0xc001cbc000) (3) Data frame handling
I0221 12:46:03.428373       8 log.go:172] (0xc001cbc000) (3) Data frame sent
I0221 12:46:03.532380       8 log.go:172] (0xc000a5c580) Data frame received for 1
I0221 12:46:03.532730       8 log.go:172] (0xc000a5c580) (0xc001cbc000) Stream removed, broadcasting: 3
I0221 12:46:03.532884       8 log.go:172] (0xc000a5c580) (0xc002622780) Stream removed, broadcasting: 5
I0221 12:46:03.532968       8 log.go:172] (0xc001d6e460) (1) Data frame handling
I0221 12:46:03.533028       8 log.go:172] (0xc001d6e460) (1) Data frame sent
I0221 12:46:03.533067       8 log.go:172] (0xc000a5c580) (0xc001d6e460) Stream removed, broadcasting: 1
I0221 12:46:03.533100       8 log.go:172] (0xc000a5c580) Go away received
I0221 12:46:03.533352       8 log.go:172] (0xc000a5c580) (0xc001d6e460) Stream removed, broadcasting: 1
I0221 12:46:03.533373       8 log.go:172] (0xc000a5c580) (0xc001cbc000) Stream removed, broadcasting: 3
I0221 12:46:03.533384       8 log.go:172] (0xc000a5c580) (0xc002622780) Stream removed, broadcasting: 5
Feb 21 12:46:03.533: INFO: Exec stderr: ""
Feb 21 12:46:03.533: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:03.533: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:03.645102       8 log.go:172] (0xc000bdd1e0) (0xc0022febe0) Create stream
I0221 12:46:03.645268       8 log.go:172] (0xc000bdd1e0) (0xc0022febe0) Stream added, broadcasting: 1
I0221 12:46:03.652460       8 log.go:172] (0xc000bdd1e0) Reply frame received for 1
I0221 12:46:03.652588       8 log.go:172] (0xc000bdd1e0) (0xc0022fec80) Create stream
I0221 12:46:03.652599       8 log.go:172] (0xc000bdd1e0) (0xc0022fec80) Stream added, broadcasting: 3
I0221 12:46:03.654351       8 log.go:172] (0xc000bdd1e0) Reply frame received for 3
I0221 12:46:03.654385       8 log.go:172] (0xc000bdd1e0) (0xc0015ec0a0) Create stream
I0221 12:46:03.654399       8 log.go:172] (0xc000bdd1e0) (0xc0015ec0a0) Stream added, broadcasting: 5
I0221 12:46:03.655479       8 log.go:172] (0xc000bdd1e0) Reply frame received for 5
I0221 12:46:03.759829       8 log.go:172] (0xc000bdd1e0) Data frame received for 3
I0221 12:46:03.760092       8 log.go:172] (0xc0022fec80) (3) Data frame handling
I0221 12:46:03.760125       8 log.go:172] (0xc0022fec80) (3) Data frame sent
I0221 12:46:03.994199       8 log.go:172] (0xc000bdd1e0) Data frame received for 1
I0221 12:46:03.994541       8 log.go:172] (0xc0022febe0) (1) Data frame handling
I0221 12:46:03.994654       8 log.go:172] (0xc0022febe0) (1) Data frame sent
I0221 12:46:03.995142       8 log.go:172] (0xc000bdd1e0) (0xc0015ec0a0) Stream removed, broadcasting: 5
I0221 12:46:03.995377       8 log.go:172] (0xc000bdd1e0) (0xc0022febe0) Stream removed, broadcasting: 1
I0221 12:46:03.995604       8 log.go:172] (0xc000bdd1e0) (0xc0022fec80) Stream removed, broadcasting: 3
I0221 12:46:03.995919       8 log.go:172] (0xc000bdd1e0) (0xc0022febe0) Stream removed, broadcasting: 1
I0221 12:46:03.995958       8 log.go:172] (0xc000bdd1e0) (0xc0022fec80) Stream removed, broadcasting: 3
I0221 12:46:03.996198       8 log.go:172] (0xc000bdd1e0) (0xc0015ec0a0) Stream removed, broadcasting: 5
I0221 12:46:03.996425       8 log.go:172] (0xc000bdd1e0) Go away received
Feb 21 12:46:03.996: INFO: Exec stderr: ""
Feb 21 12:46:03.996: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:03.996: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:04.103178       8 log.go:172] (0xc001b9e370) (0xc0018c01e0) Create stream
I0221 12:46:04.103393       8 log.go:172] (0xc001b9e370) (0xc0018c01e0) Stream added, broadcasting: 1
I0221 12:46:04.108750       8 log.go:172] (0xc001b9e370) Reply frame received for 1
I0221 12:46:04.108854       8 log.go:172] (0xc001b9e370) (0xc0022fedc0) Create stream
I0221 12:46:04.108867       8 log.go:172] (0xc001b9e370) (0xc0022fedc0) Stream added, broadcasting: 3
I0221 12:46:04.110210       8 log.go:172] (0xc001b9e370) Reply frame received for 3
I0221 12:46:04.110267       8 log.go:172] (0xc001b9e370) (0xc0018c0280) Create stream
I0221 12:46:04.110286       8 log.go:172] (0xc001b9e370) (0xc0018c0280) Stream added, broadcasting: 5
I0221 12:46:04.111525       8 log.go:172] (0xc001b9e370) Reply frame received for 5
I0221 12:46:04.318533       8 log.go:172] (0xc001b9e370) Data frame received for 3
I0221 12:46:04.318655       8 log.go:172] (0xc0022fedc0) (3) Data frame handling
I0221 12:46:04.318684       8 log.go:172] (0xc0022fedc0) (3) Data frame sent
I0221 12:46:04.421949       8 log.go:172] (0xc001b9e370) Data frame received for 1
I0221 12:46:04.422096       8 log.go:172] (0xc001b9e370) (0xc0022fedc0) Stream removed, broadcasting: 3
I0221 12:46:04.422155       8 log.go:172] (0xc0018c01e0) (1) Data frame handling
I0221 12:46:04.422187       8 log.go:172] (0xc0018c01e0) (1) Data frame sent
I0221 12:46:04.422208       8 log.go:172] (0xc001b9e370) (0xc0018c0280) Stream removed, broadcasting: 5
I0221 12:46:04.422300       8 log.go:172] (0xc001b9e370) (0xc0018c01e0) Stream removed, broadcasting: 1
I0221 12:46:04.422350       8 log.go:172] (0xc001b9e370) Go away received
I0221 12:46:04.422480       8 log.go:172] (0xc001b9e370) (0xc0018c01e0) Stream removed, broadcasting: 1
I0221 12:46:04.422516       8 log.go:172] (0xc001b9e370) (0xc0022fedc0) Stream removed, broadcasting: 3
I0221 12:46:04.422530       8 log.go:172] (0xc001b9e370) (0xc0018c0280) Stream removed, broadcasting: 5
Feb 21 12:46:04.422: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 21 12:46:04.422: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:04.422: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:04.550116       8 log.go:172] (0xc0022d82c0) (0xc001cbc3c0) Create stream
I0221 12:46:04.550229       8 log.go:172] (0xc0022d82c0) (0xc001cbc3c0) Stream added, broadcasting: 1
I0221 12:46:04.562710       8 log.go:172] (0xc0022d82c0) Reply frame received for 1
I0221 12:46:04.562845       8 log.go:172] (0xc0022d82c0) (0xc002622820) Create stream
I0221 12:46:04.562858       8 log.go:172] (0xc0022d82c0) (0xc002622820) Stream added, broadcasting: 3
I0221 12:46:04.563836       8 log.go:172] (0xc0022d82c0) Reply frame received for 3
I0221 12:46:04.563873       8 log.go:172] (0xc0022d82c0) (0xc0022fee60) Create stream
I0221 12:46:04.563881       8 log.go:172] (0xc0022d82c0) (0xc0022fee60) Stream added, broadcasting: 5
I0221 12:46:04.564707       8 log.go:172] (0xc0022d82c0) Reply frame received for 5
I0221 12:46:04.711285       8 log.go:172] (0xc0022d82c0) Data frame received for 3
I0221 12:46:04.711415       8 log.go:172] (0xc002622820) (3) Data frame handling
I0221 12:46:04.711445       8 log.go:172] (0xc002622820) (3) Data frame sent
I0221 12:46:04.819841       8 log.go:172] (0xc0022d82c0) (0xc0022fee60) Stream removed, broadcasting: 5
I0221 12:46:04.820011       8 log.go:172] (0xc0022d82c0) Data frame received for 1
I0221 12:46:04.820063       8 log.go:172] (0xc0022d82c0) (0xc002622820) Stream removed, broadcasting: 3
I0221 12:46:04.820113       8 log.go:172] (0xc001cbc3c0) (1) Data frame handling
I0221 12:46:04.820178       8 log.go:172] (0xc001cbc3c0) (1) Data frame sent
I0221 12:46:04.820211       8 log.go:172] (0xc0022d82c0) (0xc001cbc3c0) Stream removed, broadcasting: 1
I0221 12:46:04.820231       8 log.go:172] (0xc0022d82c0) Go away received
I0221 12:46:04.820425       8 log.go:172] (0xc0022d82c0) (0xc001cbc3c0) Stream removed, broadcasting: 1
I0221 12:46:04.820468       8 log.go:172] (0xc0022d82c0) (0xc002622820) Stream removed, broadcasting: 3
I0221 12:46:04.820505       8 log.go:172] (0xc0022d82c0) (0xc0022fee60) Stream removed, broadcasting: 5
Feb 21 12:46:04.820: INFO: Exec stderr: ""
Feb 21 12:46:04.820: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:04.821: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:04.925470       8 log.go:172] (0xc001b9e840) (0xc0018c0500) Create stream
I0221 12:46:04.925752       8 log.go:172] (0xc001b9e840) (0xc0018c0500) Stream added, broadcasting: 1
I0221 12:46:04.930461       8 log.go:172] (0xc001b9e840) Reply frame received for 1
I0221 12:46:04.930509       8 log.go:172] (0xc001b9e840) (0xc0015ec140) Create stream
I0221 12:46:04.930525       8 log.go:172] (0xc001b9e840) (0xc0015ec140) Stream added, broadcasting: 3
I0221 12:46:04.931708       8 log.go:172] (0xc001b9e840) Reply frame received for 3
I0221 12:46:04.931736       8 log.go:172] (0xc001b9e840) (0xc0018c05a0) Create stream
I0221 12:46:04.931746       8 log.go:172] (0xc001b9e840) (0xc0018c05a0) Stream added, broadcasting: 5
I0221 12:46:04.940502       8 log.go:172] (0xc001b9e840) Reply frame received for 5
I0221 12:46:05.050784       8 log.go:172] (0xc001b9e840) Data frame received for 3
I0221 12:46:05.050975       8 log.go:172] (0xc0015ec140) (3) Data frame handling
I0221 12:46:05.051011       8 log.go:172] (0xc0015ec140) (3) Data frame sent
I0221 12:46:05.249297       8 log.go:172] (0xc001b9e840) Data frame received for 1
I0221 12:46:05.249408       8 log.go:172] (0xc0018c0500) (1) Data frame handling
I0221 12:46:05.249440       8 log.go:172] (0xc0018c0500) (1) Data frame sent
I0221 12:46:05.249470       8 log.go:172] (0xc001b9e840) (0xc0018c0500) Stream removed, broadcasting: 1
I0221 12:46:05.249910       8 log.go:172] (0xc001b9e840) (0xc0015ec140) Stream removed, broadcasting: 3
I0221 12:46:05.250314       8 log.go:172] (0xc001b9e840) (0xc0018c05a0) Stream removed, broadcasting: 5
I0221 12:46:05.250356       8 log.go:172] (0xc001b9e840) Go away received
I0221 12:46:05.250426       8 log.go:172] (0xc001b9e840) (0xc0018c0500) Stream removed, broadcasting: 1
I0221 12:46:05.250476       8 log.go:172] (0xc001b9e840) (0xc0015ec140) Stream removed, broadcasting: 3
I0221 12:46:05.250510       8 log.go:172] (0xc001b9e840) (0xc0018c05a0) Stream removed, broadcasting: 5
Feb 21 12:46:05.250: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 21 12:46:05.250: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:05.250: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:05.344666       8 log.go:172] (0xc001b9ed10) (0xc0018c08c0) Create stream
I0221 12:46:05.344855       8 log.go:172] (0xc001b9ed10) (0xc0018c08c0) Stream added, broadcasting: 1
I0221 12:46:05.506931       8 log.go:172] (0xc001b9ed10) Reply frame received for 1
I0221 12:46:05.507177       8 log.go:172] (0xc001b9ed10) (0xc0018c0a00) Create stream
I0221 12:46:05.507206       8 log.go:172] (0xc001b9ed10) (0xc0018c0a00) Stream added, broadcasting: 3
I0221 12:46:05.511732       8 log.go:172] (0xc001b9ed10) Reply frame received for 3
I0221 12:46:05.511775       8 log.go:172] (0xc001b9ed10) (0xc0015ec1e0) Create stream
I0221 12:46:05.511794       8 log.go:172] (0xc001b9ed10) (0xc0015ec1e0) Stream added, broadcasting: 5
I0221 12:46:05.514348       8 log.go:172] (0xc001b9ed10) Reply frame received for 5
I0221 12:46:05.696291       8 log.go:172] (0xc001b9ed10) Data frame received for 3
I0221 12:46:05.696389       8 log.go:172] (0xc0018c0a00) (3) Data frame handling
I0221 12:46:05.696410       8 log.go:172] (0xc0018c0a00) (3) Data frame sent
I0221 12:46:05.809584       8 log.go:172] (0xc001b9ed10) (0xc0018c0a00) Stream removed, broadcasting: 3
I0221 12:46:05.809690       8 log.go:172] (0xc001b9ed10) Data frame received for 1
I0221 12:46:05.809716       8 log.go:172] (0xc0018c08c0) (1) Data frame handling
I0221 12:46:05.809740       8 log.go:172] (0xc0018c08c0) (1) Data frame sent
I0221 12:46:05.809751       8 log.go:172] (0xc001b9ed10) (0xc0015ec1e0) Stream removed, broadcasting: 5
I0221 12:46:05.809832       8 log.go:172] (0xc001b9ed10) (0xc0018c08c0) Stream removed, broadcasting: 1
I0221 12:46:05.809847       8 log.go:172] (0xc001b9ed10) Go away received
I0221 12:46:05.809972       8 log.go:172] (0xc001b9ed10) (0xc0018c08c0) Stream removed, broadcasting: 1
I0221 12:46:05.809986       8 log.go:172] (0xc001b9ed10) (0xc0018c0a00) Stream removed, broadcasting: 3
I0221 12:46:05.809993       8 log.go:172] (0xc001b9ed10) (0xc0015ec1e0) Stream removed, broadcasting: 5
Feb 21 12:46:05.810: INFO: Exec stderr: ""
Feb 21 12:46:05.810: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:05.810: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:05.892668       8 log.go:172] (0xc0022d8790) (0xc001cbc6e0) Create stream
I0221 12:46:05.892797       8 log.go:172] (0xc0022d8790) (0xc001cbc6e0) Stream added, broadcasting: 1
I0221 12:46:05.897284       8 log.go:172] (0xc0022d8790) Reply frame received for 1
I0221 12:46:05.897316       8 log.go:172] (0xc0022d8790) (0xc0015ec280) Create stream
I0221 12:46:05.897326       8 log.go:172] (0xc0022d8790) (0xc0015ec280) Stream added, broadcasting: 3
I0221 12:46:05.898105       8 log.go:172] (0xc0022d8790) Reply frame received for 3
I0221 12:46:05.898122       8 log.go:172] (0xc0022d8790) (0xc001cbc780) Create stream
I0221 12:46:05.898130       8 log.go:172] (0xc0022d8790) (0xc001cbc780) Stream added, broadcasting: 5
I0221 12:46:05.899072       8 log.go:172] (0xc0022d8790) Reply frame received for 5
I0221 12:46:06.007575       8 log.go:172] (0xc0022d8790) Data frame received for 3
I0221 12:46:06.007698       8 log.go:172] (0xc0015ec280) (3) Data frame handling
I0221 12:46:06.007748       8 log.go:172] (0xc0015ec280) (3) Data frame sent
I0221 12:46:06.132298       8 log.go:172] (0xc0022d8790) (0xc001cbc780) Stream removed, broadcasting: 5
I0221 12:46:06.132508       8 log.go:172] (0xc0022d8790) Data frame received for 1
I0221 12:46:06.132639       8 log.go:172] (0xc0022d8790) (0xc0015ec280) Stream removed, broadcasting: 3
I0221 12:46:06.132718       8 log.go:172] (0xc001cbc6e0) (1) Data frame handling
I0221 12:46:06.132786       8 log.go:172] (0xc001cbc6e0) (1) Data frame sent
I0221 12:46:06.132924       8 log.go:172] (0xc0022d8790) (0xc001cbc6e0) Stream removed, broadcasting: 1
I0221 12:46:06.132944       8 log.go:172] (0xc0022d8790) Go away received
I0221 12:46:06.133361       8 log.go:172] (0xc0022d8790) (0xc001cbc6e0) Stream removed, broadcasting: 1
I0221 12:46:06.133419       8 log.go:172] (0xc0022d8790) (0xc0015ec280) Stream removed, broadcasting: 3
I0221 12:46:06.133447       8 log.go:172] (0xc0022d8790) (0xc001cbc780) Stream removed, broadcasting: 5
Feb 21 12:46:06.133: INFO: Exec stderr: ""
Feb 21 12:46:06.133: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:06.133: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:06.186086       8 log.go:172] (0xc001e322c0) (0xc0015ec500) Create stream
I0221 12:46:06.186202       8 log.go:172] (0xc001e322c0) (0xc0015ec500) Stream added, broadcasting: 1
I0221 12:46:06.192026       8 log.go:172] (0xc001e322c0) Reply frame received for 1
I0221 12:46:06.192091       8 log.go:172] (0xc001e322c0) (0xc0022fef00) Create stream
I0221 12:46:06.192100       8 log.go:172] (0xc001e322c0) (0xc0022fef00) Stream added, broadcasting: 3
I0221 12:46:06.192762       8 log.go:172] (0xc001e322c0) Reply frame received for 3
I0221 12:46:06.192793       8 log.go:172] (0xc001e322c0) (0xc001cbc820) Create stream
I0221 12:46:06.192806       8 log.go:172] (0xc001e322c0) (0xc001cbc820) Stream added, broadcasting: 5
I0221 12:46:06.193645       8 log.go:172] (0xc001e322c0) Reply frame received for 5
I0221 12:46:06.289051       8 log.go:172] (0xc001e322c0) Data frame received for 3
I0221 12:46:06.289243       8 log.go:172] (0xc0022fef00) (3) Data frame handling
I0221 12:46:06.289295       8 log.go:172] (0xc0022fef00) (3) Data frame sent
I0221 12:46:06.405455       8 log.go:172] (0xc001e322c0) Data frame received for 1
I0221 12:46:06.405566       8 log.go:172] (0xc0015ec500) (1) Data frame handling
I0221 12:46:06.405591       8 log.go:172] (0xc0015ec500) (1) Data frame sent
I0221 12:46:06.405625       8 log.go:172] (0xc001e322c0) (0xc0015ec500) Stream removed, broadcasting: 1
I0221 12:46:06.405912       8 log.go:172] (0xc001e322c0) (0xc0022fef00) Stream removed, broadcasting: 3
I0221 12:46:06.405976       8 log.go:172] (0xc001e322c0) (0xc001cbc820) Stream removed, broadcasting: 5
I0221 12:46:06.406013       8 log.go:172] (0xc001e322c0) Go away received
I0221 12:46:06.406159       8 log.go:172] (0xc001e322c0) (0xc0015ec500) Stream removed, broadcasting: 1
I0221 12:46:06.406197       8 log.go:172] (0xc001e322c0) (0xc0022fef00) Stream removed, broadcasting: 3
I0221 12:46:06.406223       8 log.go:172] (0xc001e322c0) (0xc001cbc820) Stream removed, broadcasting: 5
Feb 21 12:46:06.406: INFO: Exec stderr: ""
Feb 21 12:46:06.406: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7ln7k PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 21 12:46:06.406: INFO: >>> kubeConfig: /root/.kube/config
I0221 12:46:06.472798       8 log.go:172] (0xc001b9f1e0) (0xc0018c0be0) Create stream
I0221 12:46:06.472998       8 log.go:172] (0xc001b9f1e0) (0xc0018c0be0) Stream added, broadcasting: 1
I0221 12:46:06.693414       8 log.go:172] (0xc001b9f1e0) Reply frame received for 1
I0221 12:46:06.693668       8 log.go:172] (0xc001b9f1e0) (0xc001cbc960) Create stream
I0221 12:46:06.693691       8 log.go:172] (0xc001b9f1e0) (0xc001cbc960) Stream added, broadcasting: 3
I0221 12:46:06.696274       8 log.go:172] (0xc001b9f1e0) Reply frame received for 3
I0221 12:46:06.696300       8 log.go:172] (0xc001b9f1e0) (0xc0015ec5a0) Create stream
I0221 12:46:06.696324       8 log.go:172] (0xc001b9f1e0) (0xc0015ec5a0) Stream added, broadcasting: 5
I0221 12:46:06.698058       8 log.go:172] (0xc001b9f1e0) Reply frame received for 5
I0221 12:46:06.890623       8 log.go:172] (0xc001b9f1e0) Data frame received for 3
I0221 12:46:06.890727       8 log.go:172] (0xc001cbc960) (3) Data frame handling
I0221 12:46:06.890764       8 log.go:172] (0xc001cbc960) (3) Data frame sent
I0221 12:46:07.011052       8 log.go:172] (0xc001b9f1e0) (0xc001cbc960) Stream removed, broadcasting: 3
I0221 12:46:07.011283       8 log.go:172] (0xc001b9f1e0) Data frame received for 1
I0221 12:46:07.011331       8 log.go:172] (0xc0018c0be0) (1) Data frame handling
I0221 12:46:07.011384       8 log.go:172] (0xc0018c0be0) (1) Data frame sent
I0221 12:46:07.011425       8 log.go:172] (0xc001b9f1e0) (0xc0015ec5a0) Stream removed, broadcasting: 5
I0221 12:46:07.011529       8 log.go:172] (0xc001b9f1e0) (0xc0018c0be0) Stream removed, broadcasting: 1
I0221 12:46:07.011568       8 log.go:172] (0xc001b9f1e0) Go away received
I0221 12:46:07.011894       8 log.go:172] (0xc001b9f1e0) (0xc0018c0be0) Stream removed, broadcasting: 1
I0221 12:46:07.011919       8 log.go:172] (0xc001b9f1e0) (0xc001cbc960) Stream removed, broadcasting: 3
I0221 12:46:07.011936       8 log.go:172] (0xc001b9f1e0) (0xc0015ec5a0) Stream removed, broadcasting: 5
Feb 21 12:46:07.011: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:46:07.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7ln7k" for this suite.
Feb 21 12:46:53.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:46:53.154: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7ln7k, resource: bindings, ignored listing per whitelist
Feb 21 12:46:53.210: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7ln7k deletion completed in 46.184626491s

• [SLOW TEST:77.006 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
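Editor's note: the KubeletManagedEtcHosts spec above checks that the kubelet rewrites /etc/hosts only when the container does not mount its own file and the pod is not on the host network. A minimal sketch of the distinguishing piece, assuming an illustrative hostPath mount over /etc/hosts (volume and mount names are hypothetical, not taken from this run):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A container that mounts its own /etc/hosts; the kubelet leaves such a file alone.
	// The volume name and hostPath source here are illustrative assumptions.
	hostPathType := corev1.HostPathFile
	vol := corev1.Volume{
		Name: "host-etc-hosts",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts", Type: &hostPathType},
		},
	}
	mount := corev1.VolumeMount{Name: "host-etc-hosts", MountPath: "/etc/hosts"}
	b, _ := json.MarshalIndent(map[string]interface{}{"volume": vol, "volumeMount": mount}, "", "  ")
	fmt.Println(string(b))
}
```

For the hostNetwork half of the same spec no volume is needed at all; setting HostNetwork: true on the PodSpec is already enough for the kubelet to skip managing the file.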
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:46:53.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-3c80db02-54a8-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 12:46:53.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-r47t7" to be "success or failure"
Feb 21 12:46:53.494: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 20.655733ms
Feb 21 12:46:55.516: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042331933s
Feb 21 12:46:57.527: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053639388s
Feb 21 12:46:59.860: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386902098s
Feb 21 12:47:01.884: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.410780277s
Feb 21 12:47:03.898: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.424252796s
STEP: Saw pod success
Feb 21 12:47:03.898: INFO: Pod "pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:47:03.902: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 12:47:04.266: INFO: Waiting for pod pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:47:04.564: INFO: Pod pod-configmaps-3c8224ef-54a8-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:47:04.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-r47t7" for this suite.
Feb 21 12:47:10.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:47:10.870: INFO: namespace: e2e-tests-configmap-r47t7, resource: bindings, ignored listing per whitelist
Feb 21 12:47:11.013: INFO: namespace e2e-tests-configmap-r47t7 deletion completed in 6.410105454s

• [SLOW TEST:17.803 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
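Editor's note: the ConfigMap volume spec above depends on the items mapping plus a per-item mode. A minimal sketch of that volume source, assuming illustrative key, path, and mode values (the actual values are not printed in this log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Map one ConfigMap key to a custom path with an explicit file mode.
	// The key, path, and 0400 mode are illustrative assumptions.
	mode := int32(0400)
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
		Items: []corev1.KeyToPath{{
			Key:  "data-1",
			Path: "path/to/data-1",
			Mode: &mode,
		}},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b)) // this fragment slots under spec.volumes[].configMap in a pod manifest
}
```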
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:47:11.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008
Feb 21 12:47:11.271: INFO: Pod name my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008: Found 0 pods out of 1
Feb 21 12:47:16.826: INFO: Pod name my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008: Found 1 pods out of 1
Feb 21 12:47:16.827: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008" are running
Feb 21 12:47:20.851: INFO: Pod "my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008-69p9g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 12:47:11 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 12:47:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 12:47:11 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-21 12:47:11 +0000 UTC Reason: Message:}])
Feb 21 12:47:20.851: INFO: Trying to dial the pod
Feb 21 12:47:25.928: INFO: Controller my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008: Got expected result from replica 1 [my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008-69p9g]: "my-hostname-basic-471cf054-54a8-11ea-b1f8-0242ac110008-69p9g", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:47:25.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-9rwqj" for this suite.
Feb 21 12:47:32.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:47:32.134: INFO: namespace: e2e-tests-replication-controller-9rwqj, resource: bindings, ignored listing per whitelist
Feb 21 12:47:32.199: INFO: namespace e2e-tests-replication-controller-9rwqj deletion completed in 6.261945669s

• [SLOW TEST:21.186 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
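Editor's note: the ReplicationController spec above creates one replica of a public image and dials the pod for its hostname. A minimal sketch of a comparable controller object; the label key, port, and image are assumptions, since the manifest itself is not shown in this log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"} // illustrative selector and labels
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed hostname-serving image
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(b))
}
```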
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:47:32.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 21 12:47:32.514: INFO: Number of nodes with available pods: 0
Feb 21 12:47:32.515: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:33.844: INFO: Number of nodes with available pods: 0
Feb 21 12:47:33.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:34.606: INFO: Number of nodes with available pods: 0
Feb 21 12:47:34.606: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:35.555: INFO: Number of nodes with available pods: 0
Feb 21 12:47:35.555: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:37.844: INFO: Number of nodes with available pods: 0
Feb 21 12:47:37.844: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:38.588: INFO: Number of nodes with available pods: 0
Feb 21 12:47:38.588: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:39.552: INFO: Number of nodes with available pods: 0
Feb 21 12:47:39.552: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:40.833: INFO: Number of nodes with available pods: 0
Feb 21 12:47:40.833: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:41.539: INFO: Number of nodes with available pods: 0
Feb 21 12:47:41.539: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:43.077: INFO: Number of nodes with available pods: 0
Feb 21 12:47:43.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:43.728: INFO: Number of nodes with available pods: 1
Feb 21 12:47:43.728: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 21 12:47:43.800: INFO: Number of nodes with available pods: 0
Feb 21 12:47:43.801: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:44.817: INFO: Number of nodes with available pods: 0
Feb 21 12:47:44.817: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:45.893: INFO: Number of nodes with available pods: 0
Feb 21 12:47:45.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:49.446: INFO: Number of nodes with available pods: 0
Feb 21 12:47:49.446: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:49.860: INFO: Number of nodes with available pods: 0
Feb 21 12:47:49.860: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:50.826: INFO: Number of nodes with available pods: 0
Feb 21 12:47:50.826: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:51.864: INFO: Number of nodes with available pods: 0
Feb 21 12:47:51.864: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:52.841: INFO: Number of nodes with available pods: 0
Feb 21 12:47:52.841: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:53.882: INFO: Number of nodes with available pods: 0
Feb 21 12:47:53.882: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:54.833: INFO: Number of nodes with available pods: 0
Feb 21 12:47:54.833: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:55.863: INFO: Number of nodes with available pods: 0
Feb 21 12:47:55.863: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:56.819: INFO: Number of nodes with available pods: 0
Feb 21 12:47:56.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:57.863: INFO: Number of nodes with available pods: 0
Feb 21 12:47:57.863: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:58.820: INFO: Number of nodes with available pods: 0
Feb 21 12:47:58.820: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:47:59.828: INFO: Number of nodes with available pods: 0
Feb 21 12:47:59.828: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:00.826: INFO: Number of nodes with available pods: 0
Feb 21 12:48:00.826: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:01.920: INFO: Number of nodes with available pods: 0
Feb 21 12:48:01.921: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:02.850: INFO: Number of nodes with available pods: 0
Feb 21 12:48:02.850: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:03.819: INFO: Number of nodes with available pods: 0
Feb 21 12:48:03.819: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:05.021: INFO: Number of nodes with available pods: 0
Feb 21 12:48:05.021: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:05.842: INFO: Number of nodes with available pods: 0
Feb 21 12:48:05.843: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:06.816: INFO: Number of nodes with available pods: 0
Feb 21 12:48:06.816: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:08.375: INFO: Number of nodes with available pods: 0
Feb 21 12:48:08.375: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:08.832: INFO: Number of nodes with available pods: 0
Feb 21 12:48:08.832: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:09.939: INFO: Number of nodes with available pods: 0
Feb 21 12:48:09.939: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:10.839: INFO: Number of nodes with available pods: 0
Feb 21 12:48:10.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:11.856: INFO: Number of nodes with available pods: 0
Feb 21 12:48:11.856: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 21 12:48:12.819: INFO: Number of nodes with available pods: 1
Feb 21 12:48:12.819: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bs4dv, will wait for the garbage collector to delete the pods
Feb 21 12:48:12.952: INFO: Deleting DaemonSet.extensions daemon-set took: 69.930567ms
Feb 21 12:48:13.153: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.631509ms
Feb 21 12:48:19.588: INFO: Number of nodes with available pods: 0
Feb 21 12:48:19.588: INFO: Number of running nodes: 0, number of available pods: 0
Feb 21 12:48:19.592: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bs4dv/daemonsets","resourceVersion":"22426672"},"items":null}

Feb 21 12:48:19.595: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bs4dv/pods","resourceVersion":"22426672"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:48:19.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bs4dv" for this suite.
Feb 21 12:48:25.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:48:25.790: INFO: namespace: e2e-tests-daemonsets-bs4dv, resource: bindings, ignored listing per whitelist
Feb 21 12:48:25.823: INFO: namespace e2e-tests-daemonsets-bs4dv deletion completed in 6.211230561s

• [SLOW TEST:53.623 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
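Editor's note: the DaemonSet spec above expects one daemon pod per schedulable node (a single node in this cluster), then deletes the pod and waits for the controller to revive it. A minimal sketch of a comparable DaemonSet object, with an assumed placeholder image and illustrative labels:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1", // assumed placeholder image
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}
```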
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:48:25.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-73bceb3e-54a8-11ea-b1f8-0242ac110008
STEP: Creating configMap with name cm-test-opt-upd-73bcec0b-54a8-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-73bceb3e-54a8-11ea-b1f8-0242ac110008
STEP: Updating configmap cm-test-opt-upd-73bcec0b-54a8-11ea-b1f8-0242ac110008
STEP: Creating configMap with name cm-test-opt-create-73bcec39-54a8-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:48:42.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5smzs" for this suite.
Feb 21 12:49:06.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:49:06.744: INFO: namespace: e2e-tests-configmap-5smzs, resource: bindings, ignored listing per whitelist
Feb 21 12:49:06.806: INFO: namespace e2e-tests-configmap-5smzs deletion completed in 24.29175066s

• [SLOW TEST:40.983 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
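Editor's note: the "optional updates" spec above mounts ConfigMaps marked optional, then deletes one, updates another, and creates a third while the pod keeps running. The distinguishing flag is optional: true on the volume source; a minimal sketch with an illustrative name:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An optional ConfigMap volume: the pod may start even if the ConfigMap is absent,
	// and the kubelet keeps the mounted contents in sync as the ConfigMap changes.
	optional := true
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"}, // illustrative name
		Optional:             &optional,
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b)) // slots under spec.volumes[].configMap in a pod manifest
}
```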
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:49:06.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:49:17.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-965xt" for this suite.
Feb 21 12:50:05.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:50:05.327: INFO: namespace: e2e-tests-kubelet-test-965xt, resource: bindings, ignored listing per whitelist
Feb 21 12:50:05.370: INFO: namespace e2e-tests-kubelet-test-965xt deletion completed in 48.189186798s

• [SLOW TEST:58.562 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
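Editor's note: the Kubelet hostAliases spec above checks that entries from pod.spec.hostAliases are appended to the kubelet-managed /etc/hosts. A minimal sketch of such an entry, with illustrative IP and hostnames:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative hostAliases entries; values are assumptions, not taken from this run.
	aliases := []corev1.HostAlias{
		{IP: "123.45.67.89", Hostnames: []string{"foo.remote", "bar.remote"}},
	}
	b, _ := json.MarshalIndent(aliases, "", "  ")
	fmt.Println(string(b)) // slots under spec.hostAliases in a pod manifest
}
```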
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:50:05.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 21 12:50:05.517: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 12:50:05.525: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 12:50:05.527: INFO: Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 21 12:50:05.539: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:50:05.539: INFO: 	Container coredns ready: true, restart count 0
Feb 21 12:50:05.539: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:50:05.539: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:50:05.539: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:50:05.539: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:50:05.539: INFO: 	Container coredns ready: true, restart count 0
Feb 21 12:50:05.539: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 21 12:50:05.539: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 21 12:50:05.539: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:50:05.539: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 21 12:50:05.539: INFO: 	Container weave ready: true, restart count 0
Feb 21 12:50:05.539: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b50a76a8-54a8-11ea-b1f8-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b50a76a8-54a8-11ea-b1f8-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b50a76a8-54a8-11ea-b1f8-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:50:28.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-ws9rp" for this suite.
Feb 21 12:50:42.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:50:42.221: INFO: namespace: e2e-tests-sched-pred-ws9rp, resource: bindings, ignored listing per whitelist
Feb 21 12:50:42.226: INFO: namespace e2e-tests-sched-pred-ws9rp deletion completed in 14.191681158s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:36.856 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
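Editor's note: the scheduler-predicates spec above applies a random kubernetes.io/e2e-... label to the node and relaunches the pod with a matching nodeSelector. A minimal sketch of the matching side, reusing the label key and value logged above; the pod name and image are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative name
		Spec: corev1.PodSpec{
			// Must match the label applied to the node for the pod to be scheduled there.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-b50a76a8-54a8-11ea-b1f8-0242ac110008": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // assumed placeholder image
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```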
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:50:42.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 21 12:50:42.617: INFO: Waiting up to 5m0s for pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-bx2s8" to be "success or failure"
Feb 21 12:50:42.654: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 36.845564ms
Feb 21 12:50:44.677: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060249865s
Feb 21 12:50:46.701: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083975818s
Feb 21 12:50:48.716: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09913599s
Feb 21 12:50:50.810: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192448827s
Feb 21 12:50:52.979: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.36213456s
STEP: Saw pod success
Feb 21 12:50:52.980: INFO: Pod "pod-c50a5abe-54a8-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:50:52.992: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c50a5abe-54a8-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 12:50:53.317: INFO: Waiting for pod pod-c50a5abe-54a8-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:50:53.600: INFO: Pod pod-c50a5abe-54a8-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:50:53.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-bx2s8" for this suite.
Feb 21 12:51:00.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:51:00.907: INFO: namespace: e2e-tests-emptydir-bx2s8, resource: bindings, ignored listing per whitelist
Feb 21 12:51:00.956: INFO: namespace e2e-tests-emptydir-bx2s8 deletion completed in 7.338112818s

• [SLOW TEST:18.730 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
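Editor's note: the EmptyDir spec above exercises a file created as root with mode 0666 on the default, node-disk-backed medium. A minimal sketch of the volume and mount with illustrative names; the test-container command that writes and stats the file is omitted because it is not shown in this log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Default medium ("") means node storage; corev1.StorageMediumMemory would request tmpfs instead.
	vol := corev1.Volume{
		Name: "test-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	b, _ := json.MarshalIndent(map[string]interface{}{"volume": vol, "volumeMount": mount}, "", "  ")
	fmt.Println(string(b))
}
```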
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:51:00.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d0357413-54a8-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 12:51:01.319: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-7lsmf" to be "success or failure"
Feb 21 12:51:01.332: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.080679ms
Feb 21 12:51:05.109: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789714745s
Feb 21 12:51:07.148: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.828895359s
Feb 21 12:51:09.165: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.845486967s
Feb 21 12:51:11.330: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011016178s
Feb 21 12:51:13.345: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.02574982s
STEP: Saw pod success
Feb 21 12:51:13.345: INFO: Pod "pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:51:13.357: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 12:51:13.547: INFO: Waiting for pod pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:51:13.558: INFO: Pod pod-projected-secrets-d036e867-54a8-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:51:13.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7lsmf" for this suite.
Feb 21 12:51:19.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:51:19.756: INFO: namespace: e2e-tests-projected-7lsmf, resource: bindings, ignored listing per whitelist
Feb 21 12:51:19.810: INFO: namespace e2e-tests-projected-7lsmf deletion completed in 6.236612923s

• [SLOW TEST:18.853 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
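Editor's note: the projected-secret spec above mounts a secret through a projected volume and checks the resulting file permissions. The distinguishing knob is defaultMode on the projected source; a minimal sketch with an illustrative secret name and a 0400 mode:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// defaultMode applies to every file rendered from the projection sources.
	// The secret name and the 0400 mode are illustrative assumptions.
	mode := int32(0400)
	src := corev1.ProjectedVolumeSource{
		DefaultMode: &mode,
		Sources: []corev1.VolumeProjection{{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
			},
		}},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b)) // slots under spec.volumes[].projected in a pod manifest
}
```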
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:51:19.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 12:51:20.227: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-96fbr" to be "success or failure"
Feb 21 12:51:20.236: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950906ms
Feb 21 12:51:22.251: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024187597s
Feb 21 12:51:24.264: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036621547s
Feb 21 12:51:26.286: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058430923s
Feb 21 12:51:28.561: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333842875s
Feb 21 12:51:30.593: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.36588183s
STEP: Saw pod success
Feb 21 12:51:30.593: INFO: Pod "downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:51:30.607: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 12:51:31.836: INFO: Waiting for pod downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:51:31.864: INFO: Pod downwardapi-volume-db810bd7-54a8-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:51:31.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-96fbr" for this suite.
Feb 21 12:51:38.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:51:38.179: INFO: namespace: e2e-tests-projected-96fbr, resource: bindings, ignored listing per whitelist
Feb 21 12:51:38.328: INFO: namespace e2e-tests-projected-96fbr deletion completed in 6.289224211s

• [SLOW TEST:18.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
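Editor's note: the projected downwardAPI spec above renders the container's memory limit into a file via a resourceFieldRef. A minimal sketch of that item together with the limit it reads; the container name matches the one logged above, while the file path and the 64Mi value are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The file "memory_limit" is rendered from the named container's limits.memory.
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.memory",
		},
	}
	limits := corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")}
	b, _ := json.MarshalIndent(map[string]interface{}{"downwardAPIItem": item, "containerLimits": limits}, "", "  ")
	fmt.Println(string(b))
}
```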
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:51:38.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 21 12:54:47.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:47.223: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:49.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:49.290: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:51.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:51.271: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:53.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:53.244: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:55.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:55.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:57.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:57.250: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:54:59.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:54:59.265: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:01.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:01.259: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:03.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:03.244: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:05.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:05.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:07.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:07.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:09.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:09.254: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:11.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:11.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:13.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:13.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:15.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:15.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:17.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:17.252: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:19.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:19.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:21.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:21.301: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:23.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:23.247: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:25.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:25.245: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:27.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:27.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:29.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:29.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:31.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:31.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:33.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:33.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:35.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:35.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:37.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:37.256: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:39.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:39.245: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:41.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:41.293: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:43.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:43.357: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:45.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:45.245: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:47.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:47.243: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:49.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:49.244: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:51.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:51.294: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:53.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:53.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:55.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:55.232: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:57.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:57.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:55:59.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:55:59.245: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:01.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:01.257: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:03.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:03.255: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:05.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:05.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:07.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:07.232: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:09.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:09.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:11.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:11.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:13.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:13.278: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:15.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:15.427: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:17.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:17.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:19.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:19.244: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:21.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:21.247: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:23.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:23.256: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:25.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:25.352: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:27.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:27.387: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:29.223: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:29.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:31.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:31.269: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 21 12:56:33.224: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 21 12:56:33.242: INFO: Pod pod-with-poststart-exec-hook no longer exists
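
The block of "Waiting for pod ... to disappear" lines above is the suite polling roughly every two seconds until the hook pod is deleted. A minimal client-go sketch of that kind of wait loop, assuming an older client-go whose Get call takes no context argument (the helper name and timeouts are illustrative, not the framework's own):

  package e2esketch

  import (
      "fmt"
      "time"

      apierrors "k8s.io/apimachinery/pkg/api/errors"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/wait"
      "k8s.io/client-go/kubernetes"
  )

  // waitForPodGone polls until the named pod no longer exists, mirroring the
  // "Waiting for pod ... to disappear" / "still exists" lines in the log above.
  func waitForPodGone(c kubernetes.Interface, ns, name string) error {
      return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
          _, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
          if apierrors.IsNotFound(err) {
              return true, nil // pod is gone, stop polling
          }
          if err != nil {
              return false, err // unexpected API error, give up
          }
          fmt.Printf("Pod %s still exists\n", name)
          return false, nil // keep polling
      })
  }
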
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:56:33.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-29xwf" for this suite.
Feb 21 12:56:57.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:56:57.504: INFO: namespace: e2e-tests-container-lifecycle-hook-29xwf, resource: bindings, ignored listing per whitelist
Feb 21 12:56:57.539: INFO: namespace e2e-tests-container-lifecycle-hook-29xwf deletion completed in 24.287349174s

• [SLOW TEST:319.211 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
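
For reference, the fixture exercised in the spec above is a pod whose container declares a postStart exec lifecycle hook. A sketch of such a pod object, with an illustrative image and command rather than the suite's exact fixture, and using the Handler type name from API versions contemporary with this run:

  package e2esketch

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // postStartHookPod builds a pod analogous to pod-with-poststart-exec-hook:
  // the kubelet runs the exec hook right after the container is created.
  func postStartHookPod(ns string) *corev1.Pod {
      return &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook", Namespace: ns},
          Spec: corev1.PodSpec{
              Containers: []corev1.Container{{
                  Name:  "pod-with-poststart-exec-hook", // illustrative container name
                  Image: "busybox",                      // illustrative image
                  Lifecycle: &corev1.Lifecycle{
                      PostStart: &corev1.Handler{
                          Exec: &corev1.ExecAction{
                              Command: []string{"sh", "-c", "echo poststart hook ran"},
                          },
                      },
                  },
              }},
          },
      }
  }
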
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:56:57.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-a504d586-54a9-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 12:56:58.349: INFO: Waiting up to 5m0s for pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-w7pnf" to be "success or failure"
Feb 21 12:56:58.447: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 97.256152ms
Feb 21 12:57:00.565: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21539226s
Feb 21 12:57:02.599: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249951681s
Feb 21 12:57:04.966: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616784579s
Feb 21 12:57:07.660: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.310260843s
Feb 21 12:57:09.674: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.324344797s
STEP: Saw pod success
Feb 21 12:57:09.674: INFO: Pod "pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 12:57:09.679: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008 container secret-env-test: 
STEP: delete the pod
Feb 21 12:57:09.886: INFO: Waiting for pod pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008 to disappear
Feb 21 12:57:10.205: INFO: Pod pod-secrets-a5086bbf-54a9-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:57:10.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w7pnf" for this suite.
Feb 21 12:57:16.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:57:16.490: INFO: namespace: e2e-tests-secrets-w7pnf, resource: bindings, ignored listing per whitelist
Feb 21 12:57:16.540: INFO: namespace e2e-tests-secrets-w7pnf deletion completed in 6.321165851s

• [SLOW TEST:19.000 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
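
The spec above creates a Secret and a pod whose container imports one of its keys as an environment variable. A minimal sketch of that pattern (names, key and value are illustrative, not the generated fixtures from the log):

  package e2esketch

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  // secretEnvFixtures returns a Secret and a pod that surfaces the secret's
  // "data-1" key to the container as the SECRET_DATA environment variable.
  func secretEnvFixtures(ns string) (*corev1.Secret, *corev1.Pod) {
      secret := &corev1.Secret{
          ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: ns},
          Data:       map[string][]byte{"data-1": []byte("value-1")},
      }
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: ns},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Containers: []corev1.Container{{
                  Name:    "secret-env-test",
                  Image:   "busybox", // illustrative image
                  Command: []string{"sh", "-c", "echo SECRET_DATA=$SECRET_DATA"},
                  Env: []corev1.EnvVar{{
                      Name: "SECRET_DATA",
                      ValueFrom: &corev1.EnvVarSource{
                          SecretKeyRef: &corev1.SecretKeySelector{
                              LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                              Key:                  "data-1",
                          },
                      },
                  }},
              }},
          },
      }
      return secret, pod
  }
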
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:57:16.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 21 12:57:17.075: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4h922,SelfLink:/api/v1/namespaces/e2e-tests-watch-4h922/configmaps/e2e-watch-test-resource-version,UID:b01325d6-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427587,Generation:0,CreationTimestamp:2020-02-21 12:57:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 21 12:57:17.076: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-4h922,SelfLink:/api/v1/namespaces/e2e-tests-watch-4h922/configmaps/e2e-watch-test-resource-version,UID:b01325d6-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427588,Generation:0,CreationTimestamp:2020-02-21 12:57:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:57:17.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-4h922" for this suite.
Feb 21 12:57:23.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:57:23.343: INFO: namespace: e2e-tests-watch-4h922, resource: bindings, ignored listing per whitelist
Feb 21 12:57:23.343: INFO: namespace e2e-tests-watch-4h922 deletion completed in 6.257807749s

• [SLOW TEST:6.802 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
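
The spec above records the resourceVersion returned by the first configmap update and then opens a watch from that version, so the server replays the later MODIFIED and DELETED events shown in the two "Got :" lines. A sketch of starting such a watch, assuming an older client-go whose Watch call takes no context argument (the label selector mirrors the one in the log):

  package e2esketch

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
  )

  // watchFromResourceVersion opens a configmap watch starting at rv and prints
  // every event delivered after that version.
  func watchFromResourceVersion(c kubernetes.Interface, ns, rv string) error {
      w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
          LabelSelector:   "watch-this-configmap=from-resource-version",
          ResourceVersion: rv, // replay changes that happened after this version
      })
      if err != nil {
          return err
      }
      defer w.Stop()
      for event := range w.ResultChan() {
          fmt.Printf("Got : %s %v\n", event.Type, event.Object)
      }
      return nil
  }
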
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:57:23.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 12:57:23.596: INFO: Creating deployment "nginx-deployment"
Feb 21 12:57:23.633: INFO: Waiting for observed generation 1
Feb 21 12:57:26.742: INFO: Waiting for all required pods to come up
Feb 21 12:57:26.765: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 21 12:58:11.666: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 21 12:58:11.685: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 21 12:58:11.708: INFO: Updating deployment nginx-deployment
Feb 21 12:58:11.708: INFO: Waiting for observed generation 2
Feb 21 12:58:14.756: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 21 12:58:15.193: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 21 12:58:15.246: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 21 12:58:15.475: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 21 12:58:15.475: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 21 12:58:15.484: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 21 12:58:15.521: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 21 12:58:15.521: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 21 12:58:15.542: INFO: Updating deployment nginx-deployment
Feb 21 12:58:15.542: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 21 12:58:18.915: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 21 12:58:22.151: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
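
The 20/13 split checked above is proportional scaling at work: before the scale-up the two replicasets held 8 and 5 pods (13 in total, i.e. 10 replicas plus maxSurge 3), and scaling the deployment from 10 to 30 raises the allowed total to 30 + 3 = 33, which is distributed roughly in the same 8:5 ratio, giving 20 and 13. A sketch of a deployment with comparable replica and rolling-update settings (image and names are illustrative, not the exact fixture):

  package e2esketch

  import (
      appsv1 "k8s.io/api/apps/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/util/intstr"
  )

  // proportionalScalingDeployment builds a 10-replica deployment whose
  // RollingUpdate strategy allows 3 extra pods (maxSurge) and 2 missing pods
  // (maxUnavailable) during a rollout, matching the strategy dumped below.
  func proportionalScalingDeployment(ns string) *appsv1.Deployment {
      replicas := int32(10)
      maxSurge := intstr.FromInt(3)
      maxUnavailable := intstr.FromInt(2)
      labels := map[string]string{"name": "nginx"}
      return &appsv1.Deployment{
          ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment", Namespace: ns},
          Spec: appsv1.DeploymentSpec{
              Replicas: &replicas,
              Selector: &metav1.LabelSelector{MatchLabels: labels},
              Strategy: appsv1.DeploymentStrategy{
                  Type: appsv1.RollingUpdateDeploymentStrategyType,
                  RollingUpdate: &appsv1.RollingUpdateDeployment{
                      MaxSurge:       &maxSurge,
                      MaxUnavailable: &maxUnavailable,
                  },
              },
              Template: corev1.PodTemplateSpec{
                  ObjectMeta: metav1.ObjectMeta{Labels: labels},
                  Spec: corev1.PodSpec{
                      Containers: []corev1.Container{{
                          Name:  "nginx",
                          Image: "docker.io/library/nginx:1.14-alpine",
                      }},
                  },
              },
          },
      }
  }
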
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 21 12:58:27.035: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-r79c7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r79c7/deployments/nginx-deployment,UID:b41aa8cb-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427883,Generation:3,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-21 12:58:13 +0000 UTC 2020-02-21 12:57:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-21 12:58:18 +0000 UTC 2020-02-21 12:58:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 21 12:58:27.784: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-r79c7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r79c7/replicasets/nginx-deployment-5c98f8fb5,UID:d0c72f30-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427903,Generation:3,CreationTimestamp:2020-02-21 12:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b41aa8cb-54a9-11ea-a994-fa163e34d433 0xc001eee8d7 0xc001eee8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 12:58:27.784: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 21 12:58:27.785: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-r79c7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-r79c7/replicasets/nginx-deployment-85ddf47c5d,UID:b4215b49-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427875,Generation:3,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b41aa8cb-54a9-11ea-a994-fa163e34d433 0xc001eee997 0xc001eee998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 21 12:58:28.404: INFO: Pod "nginx-deployment-5c98f8fb5-4csmh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4csmh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-4csmh,UID:d5d5643e-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427859,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef317 0xc001eef318}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.405: INFO: Pod "nginx-deployment-5c98f8fb5-89xld" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-89xld,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-89xld,UID:d651974c-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427880,Generation:0,CreationTimestamp:2020-02-21 12:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef417 0xc001eef418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.405: INFO: Pod "nginx-deployment-5c98f8fb5-8rmjr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8rmjr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-8rmjr,UID:d5116551-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427844,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef517 0xc001eef518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.405: INFO: Pod "nginx-deployment-5c98f8fb5-8v5h8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8v5h8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-8v5h8,UID:d1340a21-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427839,Generation:0,CreationTimestamp:2020-02-21 12:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef617 0xc001eef618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.405: INFO: Pod "nginx-deployment-5c98f8fb5-9kdzf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9kdzf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-9kdzf,UID:d6505f9c-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427878,Generation:0,CreationTimestamp:2020-02-21 12:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef767 0xc001eef768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.405: INFO: Pod "nginx-deployment-5c98f8fb5-9tjnq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9tjnq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-9tjnq,UID:d68bda57-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427885,Generation:0,CreationTimestamp:2020-02-21 12:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef867 0xc001eef868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef8d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef8f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.406: INFO: Pod "nginx-deployment-5c98f8fb5-cfmh8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-cfmh8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-cfmh8,UID:d0e41993-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427818,Generation:0,CreationTimestamp:2020-02-21 12:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eef967 0xc001eef968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eef9d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eef9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.406: INFO: Pod "nginx-deployment-5c98f8fb5-f9z6l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-f9z6l,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-f9z6l,UID:d65062f4-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427879,Generation:0,CreationTimestamp:2020-02-21 12:58:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eefab7 0xc001eefab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eefb20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eefb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.406: INFO: Pod "nginx-deployment-5c98f8fb5-pscnd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pscnd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-pscnd,UID:d5d3cff4-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427861,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eefbe7 0xc001eefbe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eefc50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eefc80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.406: INFO: Pod "nginx-deployment-5c98f8fb5-pzckc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pzckc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-pzckc,UID:d150c68a-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427882,Generation:0,CreationTimestamp:2020-02-21 12:58:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eefcf7 0xc001eefcf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eefd60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eefd80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.407: INFO: Pod "nginx-deployment-5c98f8fb5-sxjgg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sxjgg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-sxjgg,UID:d0e3a124-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427821,Generation:0,CreationTimestamp:2020-02-21 12:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eefe87 0xc001eefe88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eefef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eeff10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.407: INFO: Pod "nginx-deployment-5c98f8fb5-vj66j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vj66j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-vj66j,UID:d0cf5b4e-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427814,Generation:0,CreationTimestamp:2020-02-21 12:58:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001eeffd7 0xc001eeffd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a090} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:12 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:11 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:12 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.407: INFO: Pod "nginx-deployment-5c98f8fb5-vsmj4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vsmj4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-5c98f8fb5-vsmj4,UID:d64fe9b2-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427881,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d0c72f30-54a9-11ea-a994-fa163e34d433 0xc001c6a177 0xc001c6a178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a1e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.407: INFO: Pod "nginx-deployment-85ddf47c5d-6xnvw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6xnvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-6xnvw,UID:d5e54a77-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427862,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a2f7 0xc001c6a2f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a370} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.407: INFO: Pod "nginx-deployment-85ddf47c5d-8fb96" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8fb96,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-8fb96,UID:b44f6e14-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427745,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a437 0xc001c6a438}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a4a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a4c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-21 12:57:26 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:57:54 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://3aa2e6aa738f4d8ff53ad160c1cf1862d08cc11621bf0b74d7341754d72465dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.408: INFO: Pod "nginx-deployment-85ddf47c5d-8tlf5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8tlf5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-8tlf5,UID:d3ffb2ab-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427893,Generation:0,CreationTimestamp:2020-02-21 12:58:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a587 0xc001c6a588}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a5f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:18 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.408: INFO: Pod "nginx-deployment-85ddf47c5d-9qlb5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9qlb5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-9qlb5,UID:d511a297-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427837,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a6c7 0xc001c6a6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.408: INFO: Pod "nginx-deployment-85ddf47c5d-cdlln" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cdlln,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-cdlln,UID:d5161e8e-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427856,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a7c7 0xc001c6a7c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.408: INFO: Pod "nginx-deployment-85ddf47c5d-gcr9n" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gcr9n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-gcr9n,UID:d511c531-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427840,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a8c7 0xc001c6a8c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6a930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6a950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:18 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.409: INFO: Pod "nginx-deployment-85ddf47c5d-kkt9r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kkt9r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-kkt9r,UID:d5e50241-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427864,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6a9c7 0xc001c6a9c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6aa30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6aa50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.409: INFO: Pod "nginx-deployment-85ddf47c5d-lk6tc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lk6tc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-lk6tc,UID:b4404aa7-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427742,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6aac7 0xc001c6aac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6ab30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6ab50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:58:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ee55bc89a8eee4e1d1a8afc4ba687b6d8aedf72cd08cee00dc9083b0f12cf8f6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.409: INFO: Pod "nginx-deployment-85ddf47c5d-nn68h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nn68h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-nn68h,UID:b44f5267-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427766,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6ac17 0xc001c6ac18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6ac80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6aca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:58:01 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://5375de4773221dfdb37317fc7d2535b1321babfa104025ca8b5130fb998fbe0e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.410: INFO: Pod "nginx-deployment-85ddf47c5d-npds7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-npds7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-npds7,UID:d515e83b-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427899,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6ad67 0xc001c6ad68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6add0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6adf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 12:58:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.410: INFO: Pod "nginx-deployment-85ddf47c5d-nxt8f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nxt8f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-nxt8f,UID:d5e52b74-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427858,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6aea7 0xc001c6aea8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6afc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6afe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.410: INFO: Pod "nginx-deployment-85ddf47c5d-rt86c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rt86c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-rt86c,UID:b44e5bfb-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427720,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6b1d7 0xc001c6b1d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6b340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6b360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:57:57 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c55c2e168dd8403111f023d38c119667ca85069ea9c7ef84e9be79fbfeb86249}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.410: INFO: Pod "nginx-deployment-85ddf47c5d-s4b2z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s4b2z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-s4b2z,UID:d5e60894-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427863,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6b427 0xc001c6b428}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6b4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6b4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.410: INFO: Pod "nginx-deployment-85ddf47c5d-t2x7r" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t2x7r,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-t2x7r,UID:b458ec35-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427739,Generation:0,CreationTimestamp:2020-02-21 12:57:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6b567 0xc001c6b568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6b5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6b5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-21 12:57:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:58:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e627d7e5c7c6f74cc1371215c4f38527d3f833042ae384c4ff452ebf1dc59656}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.411: INFO: Pod "nginx-deployment-85ddf47c5d-t89wl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t89wl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-t89wl,UID:b44f28a6-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427762,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6b777 0xc001c6b778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6b7f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6b810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:58:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://c4c2db26fb1d4426fead6a82a3095945f1623eb861f73f0f53c4d6583dce2567}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.411: INFO: Pod "nginx-deployment-85ddf47c5d-vd22s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vd22s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-vd22s,UID:d5164038-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427860,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6b8d7 0xc001c6b8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6b970} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6b990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.411: INFO: Pod "nginx-deployment-85ddf47c5d-vwfs7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vwfs7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-vwfs7,UID:b43ffced-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427733,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6ba07 0xc001c6ba08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6baf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6bb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:58:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a69aeb66bddb54c05a4604dc5163946d0e90df827a51ae3321e8cd82258d3465}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.412: INFO: Pod "nginx-deployment-85ddf47c5d-vzcfn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vzcfn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-vzcfn,UID:b43ac25f-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427750,Generation:0,CreationTimestamp:2020-02-21 12:57:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6bc47 0xc001c6bc48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6bcb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6bcd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:57:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-21 12:57:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-21 12:57:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://24c63cf180247159c060808c045f2c90674fdcb6c26b9a247de631fdb6e8153f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.412: INFO: Pod "nginx-deployment-85ddf47c5d-wt98t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wt98t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-wt98t,UID:d51666e2-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427848,Generation:0,CreationTimestamp:2020-02-21 12:58:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c6be97 0xc001c6be98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c6bf30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c6bf50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 21 12:58:28.412: INFO: Pod "nginx-deployment-85ddf47c5d-wwf7j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wwf7j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-r79c7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-r79c7/pods/nginx-deployment-85ddf47c5d-wwf7j,UID:d5e515e3-54a9-11ea-a994-fa163e34d433,ResourceVersion:22427865,Generation:0,CreationTimestamp:2020-02-21 12:58:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b4215b49-54a9-11ea-a994-fa163e34d433 0xc001c9c017 0xc001c9c018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ppswb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ppswb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ppswb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c9c080} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c9c110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 12:58:20 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 12:58:28.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-r79c7" for this suite.
Feb 21 12:59:29.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 12:59:29.451: INFO: namespace: e2e-tests-deployment-r79c7, resource: bindings, ignored listing per whitelist
Feb 21 12:59:29.552: INFO: namespace e2e-tests-deployment-r79c7 deletion completed in 1m0.900031287s

• [SLOW TEST:126.209 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
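For reference, the kind of object this spec drives looks roughly like the manifest below: a RollingUpdate Deployment whose surge/unavailable budget is what makes proportional scaling observable when the replica count changes mid-rollout. This is an illustrative sketch only (the replica counts, the maxSurge/maxUnavailable values and the scale target are assumptions, not taken from this run); it assumes a reachable cluster and the same --kubeconfig used above.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Scaling while a rollout is still in progress should be split proportionally
# between the old and the new ReplicaSet, which is what the spec asserts:
kubectl --kubeconfig=/root/.kube/config scale deployment nginx-deployment --replicas=20
kubectl --kubeconfig=/root/.kube/config get rs -l name=nginx
------------------------------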
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 12:59:29.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 21 12:59:30.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 21 12:59:30.647: INFO: Waiting for terminating namespaces to be deleted...
Feb 21 12:59:30.665: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 21 12:59:30.910: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:59:30.910: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 21 12:59:30.910: INFO: 	Container weave ready: true, restart count 0
Feb 21 12:59:30.910: INFO: 	Container weave-npc ready: true, restart count 0
Feb 21 12:59:30.910: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:59:30.910: INFO: 	Container coredns ready: true, restart count 0
Feb 21 12:59:30.910: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:59:30.910: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:59:30.910: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 21 12:59:30.910: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 21 12:59:30.910: INFO: 	Container coredns ready: true, restart count 0
Feb 21 12:59:30.910: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 21 12:59:30.910: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 21 12:59:31.387: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-00446dc2-54aa-11ea-b1f8-0242ac110008.15f56c58b49e9c18], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-l652c/filler-pod-00446dc2-54aa-11ea-b1f8-0242ac110008 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-00446dc2-54aa-11ea-b1f8-0242ac110008.15f56c5e36c609ce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-00446dc2-54aa-11ea-b1f8-0242ac110008.15f56c5efea63099], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-00446dc2-54aa-11ea-b1f8-0242ac110008.15f56c5f3696f695], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f56c5fb249b61e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:00:03.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-l652c" for this suite.
Feb 21 13:00:11.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:00:11.981: INFO: namespace: e2e-tests-sched-pred-l652c, resource: bindings, ignored listing per whitelist
Feb 21 13:00:11.989: INFO: namespace e2e-tests-sched-pred-l652c deletion completed in 8.326552778s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:42.437 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
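The predicate exercised above can be reproduced by hand along these lines: once filler pods have consumed most of the node's allocatable CPU, one more pod whose request exceeds what is left stays Pending with a FailedScheduling "Insufficient cpu" event, exactly as in the log. Illustrative only; the 600m request is an assumption and simply has to be larger than the node's remaining allocatable CPU.

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 600m
EOF
# The scheduler records the reason on the pod's events:
kubectl --kubeconfig=/root/.kube/config describe pod additional-pod | grep -A 5 Events
------------------------------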
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:00:11.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 21 13:00:12.478: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 21 13:00:17.493: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:00:18.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-h7b6n" for this suite.
Feb 21 13:00:26.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:00:28.171: INFO: namespace: e2e-tests-replication-controller-h7b6n, resource: bindings, ignored listing per whitelist
Feb 21 13:00:28.183: INFO: namespace e2e-tests-replication-controller-h7b6n deletion completed in 9.330116607s

• [SLOW TEST:16.193 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
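What "released" means here: when a pod's labels stop matching its ReplicationController's selector, the controller drops its ownerReference (orphaning the pod) and starts a replacement. A minimal sketch of that flow, with assumed image and label values:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# Relabel the pod so it no longer matches the selector; the RC should release it
# (ownerReferences becomes empty) and create a new replica in its place.
POD=$(kubectl --kubeconfig=/root/.kube/config get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl --kubeconfig=/root/.kube/config label pod "$POD" name=pod-released --overwrite
kubectl --kubeconfig=/root/.kube/config get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
------------------------------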
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:00:28.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-nvk7v
Feb 21 13:00:40.692: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-nvk7v
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 13:00:40.697: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:04:42.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-nvk7v" for this suite.
Feb 21 13:04:48.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:04:48.408: INFO: namespace: e2e-tests-container-probe-nvk7v, resource: bindings, ignored listing per whitelist
Feb 21 13:04:48.458: INFO: namespace e2e-tests-container-probe-nvk7v deletion completed in 6.165334626s

• [SLOW TEST:260.275 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
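The scenario above amounts to an exec liveness probe that keeps succeeding, so restartCount stays at 0 for the whole observation window. A minimal sketch (the image, timings and container name are assumptions, not the suite's exact pod):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# /tmp/health is never removed, so the probe keeps passing:
kubectl --kubeconfig=/root/.kube/config get pod liveness-exec -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------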
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:04:48.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-nmvjl
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-nmvjl to expose endpoints map[]
Feb 21 13:04:48.815: INFO: Get endpoints failed (47.314754ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 21 13:04:49.830: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-nmvjl exposes endpoints map[] (1.062413416s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-nmvjl
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-nmvjl to expose endpoints map[pod1:[80]]
Feb 21 13:04:57.913: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (8.049863248s elapsed, will retry)
Feb 21 13:05:02.089: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-nmvjl exposes endpoints map[pod1:[80]] (12.225587193s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-nmvjl
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-nmvjl to expose endpoints map[pod2:[80] pod1:[80]]
Feb 21 13:05:06.654: INFO: Unexpected endpoints: found map[be1429b2-54aa-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.54916176s elapsed, will retry)
Feb 21 13:05:12.503: INFO: Unexpected endpoints: found map[be1429b2-54aa-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (10.398656509s elapsed, will retry)
Feb 21 13:05:13.530: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-nmvjl exposes endpoints map[pod1:[80] pod2:[80]] (11.425813227s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-nmvjl
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-nmvjl to expose endpoints map[pod2:[80]]
Feb 21 13:05:17.137: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-nmvjl exposes endpoints map[pod2:[80]] (3.595003615s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-nmvjl
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-nmvjl to expose endpoints map[]
Feb 21 13:05:17.390: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-nmvjl exposes endpoints map[] (202.874592ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:05:17.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-nmvjl" for this suite.
Feb 21 13:05:49.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:05:49.884: INFO: namespace: e2e-tests-services-nmvjl, resource: bindings, ignored listing per whitelist
Feb 21 13:05:49.910: INFO: namespace e2e-tests-services-nmvjl deletion completed in 32.235662997s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:61.451 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
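The objects managed above are a selector-based Service plus plain pods; the Endpoints object simply follows whichever pods currently match the selector on port 80. An illustrative sketch (the label key/value is an assumption; the service name, pod name and port come from the log):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
EOF
# As matching pods are created and deleted, the endpoints list changes with them:
kubectl --kubeconfig=/root/.kube/config get endpoints endpoint-test2 -o wide
------------------------------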
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:05:49.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-frvb5
Feb 21 13:06:02.269: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-frvb5
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 13:06:02.282: INFO: Initial restart count of pod liveness-exec is 0
Feb 21 13:06:58.383: INFO: Restart count of pod e2e-tests-container-probe-frvb5/liveness-exec is now 1 (56.101282331s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:06:58.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-frvb5" for this suite.
Feb 21 13:07:04.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:07:04.731: INFO: namespace: e2e-tests-container-probe-frvb5, resource: bindings, ignored listing per whitelist
Feb 21 13:07:04.840: INFO: namespace e2e-tests-container-probe-frvb5 deletion completed in 6.39716507s

• [SLOW TEST:74.930 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
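This is the failing-probe counterpart of the earlier liveness spec: the probed file is removed after a while, the exec probe starts failing and the kubelet restarts the container, which is why restartCount moves from 0 to 1 above. A minimal sketch with assumed image, pod name and timings:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-restart
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF
# Watch the restart count climb once /tmp/health disappears:
kubectl --kubeconfig=/root/.kube/config get pod liveness-exec-restart -w
------------------------------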
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:07:04.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0ed70013-54ab-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 21 13:07:05.375: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008" in namespace "e2e-tests-configmap-c54zq" to be "success or failure"
Feb 21 13:07:05.385: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.500646ms
Feb 21 13:07:08.194: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818156656s
Feb 21 13:07:10.229: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.853155433s
Feb 21 13:07:12.260: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.884402799s
Feb 21 13:07:14.304: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.927973936s
Feb 21 13:07:16.426: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.04996372s
Feb 21 13:07:19.382: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.006117848s
STEP: Saw pod success
Feb 21 13:07:19.382: INFO: Pod "pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:07:19.420: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 21 13:07:19.916: INFO: Waiting for pod pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:07:19.945: INFO: Pod pod-configmaps-0ed990c9-54ab-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:07:19.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c54zq" for this suite.
Feb 21 13:07:26.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:07:26.701: INFO: namespace: e2e-tests-configmap-c54zq, resource: bindings, ignored listing per whitelist
Feb 21 13:07:26.713: INFO: namespace e2e-tests-configmap-c54zq deletion completed in 6.559306314s

• [SLOW TEST:21.871 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
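The pattern behind this spec is a ConfigMap projected into a volume and read once by a short-lived container; the pod ending in Succeeded is the "success or failure" condition checked above. An illustrative sketch: the ConfigMap name, key and mount path are assumptions, while the container name configmap-volume-test comes from the log.

kubectl --kubeconfig=/root/.kube/config create configmap configmap-test-volume --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF
kubectl --kubeconfig=/root/.kube/config logs pod-configmaps -c configmap-volume-test
------------------------------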
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:07:26.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 21 13:07:27.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sztx9'
Feb 21 13:07:29.718: INFO: stderr: ""
Feb 21 13:07:29.718: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 21 13:07:29.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sztx9'
Feb 21 13:07:34.679: INFO: stderr: ""
Feb 21 13:07:34.680: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:07:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sztx9" for this suite.
Feb 21 13:07:41.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:07:41.316: INFO: namespace: e2e-tests-kubectl-sztx9, resource: bindings, ignored listing per whitelist
Feb 21 13:07:41.365: INFO: namespace e2e-tests-kubectl-sztx9 deletion completed in 6.65571068s

• [SLOW TEST:14.652 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
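The command under test is shown verbatim in the log above; a quick way to confirm the resulting pod really carries restartPolicy Never is a jsonpath query (the query and the clean-up line are an illustration, not part of the suite):

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never \
  --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'
kubectl --kubeconfig=/root/.kube/config delete pod e2e-test-nginx-pod
------------------------------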
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:07:41.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 21 13:07:41.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008" in namespace "e2e-tests-downward-api-5jqxn" to be "success or failure"
Feb 21 13:07:41.706: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.339256ms
Feb 21 13:07:43.715: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030296886s
Feb 21 13:07:45.727: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041901851s
Feb 21 13:07:48.534: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.849359141s
Feb 21 13:07:50.560: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.875139076s
Feb 21 13:07:52.602: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.917042547s
STEP: Saw pod success
Feb 21 13:07:52.603: INFO: Pod "downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:07:52.649: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008 container client-container: 
STEP: delete the pod
Feb 21 13:07:52.978: INFO: Waiting for pod downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:07:53.000: INFO: Pod downwardapi-volume-246a2460-54ab-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:07:53.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5jqxn" for this suite.
Feb 21 13:07:59.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:07:59.207: INFO: namespace: e2e-tests-downward-api-5jqxn, resource: bindings, ignored listing per whitelist
Feb 21 13:07:59.287: INFO: namespace e2e-tests-downward-api-5jqxn deletion completed in 6.277482786s

• [SLOW TEST:17.922 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
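The volume plugin tested here projects the container's own CPU request into a file via resourceFieldRef. A minimal sketch, assuming a 250m request and a busybox one-shot reader (the container name client-container matches the log; everything else is assumed):

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
# With a 1m divisor the projected file should read 250 for the 250m request above:
kubectl --kubeconfig=/root/.kube/config logs downwardapi-volume-cpu -c client-container
------------------------------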
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:07:59.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-2f1f6880-54ab-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 13:07:59.519: INFO: Waiting up to 5m0s for pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-khgj2" to be "success or failure"
Feb 21 13:07:59.525: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.935635ms
Feb 21 13:08:01.536: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017606875s
Feb 21 13:08:03.550: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031398026s
Feb 21 13:08:05.567: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048129083s
Feb 21 13:08:07.713: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194667438s
Feb 21 13:08:09.731: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212061363s
STEP: Saw pod success
Feb 21 13:08:09.731: INFO: Pod "pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:08:09.745: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 13:08:11.133: INFO: Waiting for pod pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:08:11.150: INFO: Pod pod-secrets-2f20d4ac-54ab-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:08:11.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-khgj2" for this suite.
Feb 21 13:08:17.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:08:17.468: INFO: namespace: e2e-tests-secrets-khgj2, resource: bindings, ignored listing per whitelist
Feb 21 13:08:17.483: INFO: namespace e2e-tests-secrets-khgj2 deletion completed in 6.32496733s

• [SLOW TEST:18.195 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
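"Mappings and Item Mode" means the secret key is remapped to a different file name via items and given an explicit file mode. An illustrative sketch (secret name, key, path and the 0400 mode are assumptions; the container name secret-volume-test matches the log):

kubectl --kubeconfig=/root/.kube/config create secret generic secret-test-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-mapped
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF
kubectl --kubeconfig=/root/.kube/config logs pod-secrets-mapped -c secret-volume-test
------------------------------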
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:08:17.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 21 13:08:17.809: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-fwcst" to be "success or failure"
Feb 21 13:08:18.017: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 207.579711ms
Feb 21 13:08:20.033: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223407629s
Feb 21 13:08:22.061: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.251748388s
Feb 21 13:08:24.075: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265364749s
Feb 21 13:08:26.151: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.341300394s
Feb 21 13:08:28.885: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.074864633s
Feb 21 13:08:30.965: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.155072859s
Feb 21 13:08:33.061: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.251077435s
Feb 21 13:08:35.523: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.7128839s
STEP: Saw pod success
Feb 21 13:08:35.523: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 21 13:08:35.542: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 21 13:08:36.191: INFO: Waiting for pod pod-host-path-test to disappear
Feb 21 13:08:36.213: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:08:36.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-fwcst" for this suite.
Feb 21 13:08:42.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:08:42.399: INFO: namespace: e2e-tests-hostpath-fwcst, resource: bindings, ignored listing per whitelist
Feb 21 13:08:42.429: INFO: namespace e2e-tests-hostpath-fwcst deletion completed in 6.200216803s

• [SLOW TEST:24.946 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
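This spec mounts a hostPath volume and checks the mode of the mounted path. A minimal sketch; the host path, image and the stat check are assumptions, while pod-host-path-test and test-container-1 match the log:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c %a /mnt/host-path"]
    volumeMounts:
    - name: host-path
      mountPath: /mnt/host-path
  volumes:
  - name: host-path
    hostPath:
      path: /tmp/host-path-test
      type: DirectoryOrCreate
EOF
kubectl --kubeconfig=/root/.kube/config logs pod-host-path-test -c test-container-1
------------------------------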
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:08:42.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-48e9992b-54ab-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-48e9992b-54ab-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:10:01.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wlxk8" for this suite.
Feb 21 13:10:25.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:10:25.964: INFO: namespace: e2e-tests-configmap-wlxk8, resource: bindings, ignored listing per whitelist
Feb 21 13:10:25.984: INFO: namespace e2e-tests-configmap-wlxk8 deletion completed in 24.493102036s

• [SLOW TEST:103.555 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
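The long wait in this spec ("waiting to observe update in volume") is the kubelet's periodic sync of mounted ConfigMaps: after the ConfigMap is edited, the projected file changes, but not immediately. A rough sketch of that flow with assumed names and values:

kubectl --kubeconfig=/root/.kube/config create configmap configmap-test-upd --from-literal=data-1=value-1
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
EOF
# Update the ConfigMap in place and watch the mounted file eventually change:
kubectl --kubeconfig=/root/.kube/config create configmap configmap-test-upd --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl --kubeconfig=/root/.kube/config apply -f -
kubectl --kubeconfig=/root/.kube/config logs -f pod-configmap-upd
------------------------------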
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:10:25.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-86945377-54ab-11ea-b1f8-0242ac110008
STEP: Creating secret with name s-test-opt-upd-86945460-54ab-11ea-b1f8-0242ac110008
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-86945377-54ab-11ea-b1f8-0242ac110008
STEP: Updating secret s-test-opt-upd-86945460-54ab-11ea-b1f8-0242ac110008
STEP: Creating secret with name s-test-opt-create-86945499-54ab-11ea-b1f8-0242ac110008
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:12:07.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-g7z6v" for this suite.
Feb 21 13:12:31.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:12:32.121: INFO: namespace: e2e-tests-secrets-g7z6v, resource: bindings, ignored listing per whitelist
Feb 21 13:12:32.252: INFO: namespace e2e-tests-secrets-g7z6v deletion completed in 24.501515756s

• [SLOW TEST:126.268 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
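The "optional" part of this spec is that a secret volume marked optional may be mounted before its Secret exists; once the Secret is created (or updated) the kubelet eventually surfaces the keys in the mounted directory, which is what the spec above waits to observe. A minimal sketch with assumed names and paths:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; echo; sleep 5; done"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create
      optional: true
EOF
# Creating the Secret after the pod is already running should eventually make its
# keys appear under /etc/secret-volume:
kubectl --kubeconfig=/root/.kube/config create secret generic s-test-opt-create --from-literal=data-1=value-1
------------------------------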
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:12:32.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 13:12:32.610: INFO: Creating deployment "test-recreate-deployment"
Feb 21 13:12:32.635: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 21 13:12:32.654: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 21 13:12:36.146: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 21 13:12:36.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:38.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:40.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:42.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:44.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:46.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:48.185: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717887552, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 21 13:12:50.174: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 21 13:12:50.216: INFO: Updating deployment test-recreate-deployment
Feb 21 13:12:50.216: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 21 13:12:52.745: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-dvnjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dvnjv/deployments/test-recreate-deployment,UID:d1ea2fe5-54ab-11ea-a994-fa163e34d433,ResourceVersion:22429476,Generation:2,CreationTimestamp:2020-02-21 13:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-21 13:12:50 +0000 UTC 2020-02-21 13:12:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-21 13:12:51 +0000 UTC 2020-02-21 13:12:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 21 13:12:52.779: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-dvnjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dvnjv/replicasets/test-recreate-deployment-589c4bfd,UID:dcb12f93-54ab-11ea-a994-fa163e34d433,ResourceVersion:22429475,Generation:1,CreationTimestamp:2020-02-21 13:12:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d1ea2fe5-54ab-11ea-a994-fa163e34d433 0xc001e7cf0f 0xc001e7cf20}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 13:12:52.779: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 21 13:12:52.780: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-dvnjv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dvnjv/replicasets/test-recreate-deployment-5bf7f65dc,UID:d1fbd901-54ab-11ea-a994-fa163e34d433,ResourceVersion:22429466,Generation:2,CreationTimestamp:2020-02-21 13:12:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d1ea2fe5-54ab-11ea-a994-fa163e34d433 0xc001e7cfe0 0xc001e7cfe1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 21 13:12:53.268: INFO: Pod "test-recreate-deployment-589c4bfd-852dw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-852dw,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-dvnjv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dvnjv/pods/test-recreate-deployment-589c4bfd-852dw,UID:dcbac2e8-54ab-11ea-a994-fa163e34d433,ResourceVersion:22429480,Generation:0,CreationTimestamp:2020-02-21 13:12:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd dcb12f93-54ab-11ea-a994-fa163e34d433 0xc001e7d89f 0xc001e7d8b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jrv2x {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jrv2x,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jrv2x true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e7d9e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e7da00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:12:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:12:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:12:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:12:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-21 13:12:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:12:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-dvnjv" for this suite.
Feb 21 13:13:01.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:13:01.980: INFO: namespace: e2e-tests-deployment-dvnjv, resource: bindings, ignored listing per whitelist
Feb 21 13:13:02.055: INFO: namespace e2e-tests-deployment-dvnjv deletion completed in 8.758525061s

• [SLOW TEST:29.802 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
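The RecreateDeployment spec above passes because the Deployment's update strategy is Recreate: the old redis ReplicaSet is scaled to 0 (revision 1, Replicas:*0 above) before the new nginx ReplicaSet comes up, which is why the new pod is still Pending when the test dumps it. A rough kubectl sketch of that kind of Deployment; the names are illustrative, not the suite's exact objects:

kubectl create -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-demo                # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate                        # delete all old pods before starting new ones
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# With Recreate, the rollout only proceeds once the old ReplicaSet reports 0 replicas.
kubectl rollout status deployment/test-recreate-demo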
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:13:02.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 21 13:13:02.700: INFO: Waiting up to 5m0s for pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008" in namespace "e2e-tests-containers-l7qhh" to be "success or failure"
Feb 21 13:13:02.715: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.709975ms
Feb 21 13:13:04.755: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054795483s
Feb 21 13:13:06.775: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074468747s
Feb 21 13:13:08.800: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.099616155s
Feb 21 13:13:10.827: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.126602547s
Feb 21 13:13:12.840: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.139789516s
Feb 21 13:13:15.458: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.757229551s
Feb 21 13:13:17.476: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.775710631s
STEP: Saw pod success
Feb 21 13:13:17.477: INFO: Pod "client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:13:17.482: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 13:13:18.659: INFO: Waiting for pod client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:13:19.161: INFO: Pod client-containers-e3c4bb4b-54ab-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:13:19.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-l7qhh" for this suite.
Feb 21 13:13:27.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:13:27.288: INFO: namespace: e2e-tests-containers-l7qhh, resource: bindings, ignored listing per whitelist
Feb 21 13:13:27.412: INFO: namespace e2e-tests-containers-l7qhh deletion completed in 8.23947839s

• [SLOW TEST:25.357 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
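The override-arguments pod exercises the container args field, which replaces the image's default CMD (the "docker cmd" in the test name) while leaving any ENTRYPOINT alone. A minimal sketch, with busybox standing in for the suite's test image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "override", "arguments"]   # replaces busybox's default CMD ("sh")
EOF
# As in the test, wait for the pod to reach Succeeded, then read the container log.
kubectl logs client-containers-demo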
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:13:27.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 21 13:13:28.476: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:13:28.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zh7mg" for this suite.
Feb 21 13:13:34.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:13:34.922: INFO: namespace: e2e-tests-kubectl-zh7mg, resource: bindings, ignored listing per whitelist
Feb 21 13:13:34.968: INFO: namespace e2e-tests-kubectl-zh7mg deletion completed in 6.300053114s

• [SLOW TEST:7.555 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
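kubectl proxy -p 0 binds an ephemeral local port and prints it on startup; the test then curls /api/ through that port, which is why no fixed port appears in the log. Outside the suite this is roughly:

kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter &
# stdout reports the chosen port, e.g. "Starting to serve on 127.0.0.1:37465"
curl http://127.0.0.1:37465/api/          # substitute whatever port the proxy printed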
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:13:34.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 13:13:35.556: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f7585e5a-54ab-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00275f3a2), BlockOwnerDeletion:(*bool)(0xc00275f3a3)}}
Feb 21 13:13:35.632: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f74e70db-54ab-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00275f562), BlockOwnerDeletion:(*bool)(0xc00275f563)}}
Feb 21 13:13:35.729: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"f75404b9-54ab-11ea-a994-fa163e34d433", Controller:(*bool)(0xc0029fc43a), BlockOwnerDeletion:(*bool)(0xc0029fc43b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:13:40.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-hlrk8" for this suite.
Feb 21 13:13:47.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:13:47.219: INFO: namespace: e2e-tests-gc-hlrk8, resource: bindings, ignored listing per whitelist
Feb 21 13:13:47.354: INFO: namespace e2e-tests-gc-hlrk8 deletion completed in 6.379166898s

• [SLOW TEST:12.385 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
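The three pods above are wired into an ownership cycle through metadata.ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector still has to clean them up without deadlocking. A hand-rolled sketch of attaching one such reference; the pod names are made up, and the owner's uid has to be read first:

OWNER_UID=$(kubectl get pod pod3 -o jsonpath='{.metadata.uid}')
kubectl patch pod pod1 --type=merge -p "{
  \"metadata\": {\"ownerReferences\": [{
    \"apiVersion\": \"v1\",
    \"kind\": \"Pod\",
    \"name\": \"pod3\",
    \"uid\": \"${OWNER_UID}\",
    \"controller\": true,
    \"blockOwnerDeletion\": true
  }]}
}"
# Deleting any member of the cycle should still cascade and not block.
kubectl delete pod pod1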
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:13:47.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-s8wnm
Feb 21 13:13:57.652: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-s8wnm
STEP: checking the pod's current state and verifying that restartCount is present
Feb 21 13:13:57.658: INFO: Initial restart count of pod liveness-http is 0
Feb 21 13:14:19.911: INFO: Restart count of pod e2e-tests-container-probe-s8wnm/liveness-http is now 1 (22.252525692s elapsed)
Feb 21 13:14:42.226: INFO: Restart count of pod e2e-tests-container-probe-s8wnm/liveness-http is now 2 (44.568375465s elapsed)
Feb 21 13:15:00.449: INFO: Restart count of pod e2e-tests-container-probe-s8wnm/liveness-http is now 3 (1m2.790545162s elapsed)
Feb 21 13:15:20.753: INFO: Restart count of pod e2e-tests-container-probe-s8wnm/liveness-http is now 4 (1m23.095037765s elapsed)
Feb 21 13:16:21.777: INFO: Restart count of pod e2e-tests-container-probe-s8wnm/liveness-http is now 5 (2m24.11846919s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:16:21.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-s8wnm" for this suite.
Feb 21 13:16:28.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:16:28.136: INFO: namespace: e2e-tests-container-probe-s8wnm, resource: bindings, ignored listing per whitelist
Feb 21 13:16:28.205: INFO: namespace e2e-tests-container-probe-s8wnm deletion completed in 6.266111735s

• [SLOW TEST:160.849 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
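liveness-http is a pod whose HTTP liveness probe goes unhealthy, so the kubelet keeps restarting the container and the test asserts that status.containerStatuses[0].restartCount only ever increases. A rough equivalent, with illustrative image and probe settings:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness            # illustrative image whose /healthz starts failing
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1
EOF
# Poll the restart count; it should be monotonically increasing.
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'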
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:16:28.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5e9b9a25-54ac-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 13:16:28.969: INFO: Waiting up to 5m0s for pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008" in namespace "e2e-tests-secrets-w5qnj" to be "success or failure"
Feb 21 13:16:29.082: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 112.129087ms
Feb 21 13:16:32.037: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.067142242s
Feb 21 13:16:34.060: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.08985159s
Feb 21 13:16:36.075: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105586702s
Feb 21 13:16:40.318: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.348092552s
Feb 21 13:16:42.608: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.638509129s
Feb 21 13:16:44.627: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.65687962s
Feb 21 13:16:46.638: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.6683645s
STEP: Saw pod success
Feb 21 13:16:46.638: INFO: Pod "pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:16:46.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 21 13:16:48.415: INFO: Waiting for pod pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:16:48.434: INFO: Pod pod-secrets-5ec4e5ee-54ac-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:16:48.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-w5qnj" for this suite.
Feb 21 13:16:56.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:16:56.717: INFO: namespace: e2e-tests-secrets-w5qnj, resource: bindings, ignored listing per whitelist
Feb 21 13:16:56.895: INFO: namespace e2e-tests-secrets-w5qnj deletion completed in 8.436199363s
STEP: Destroying namespace "e2e-tests-secret-namespace-4b64q" for this suite.
Feb 21 13:17:03.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:17:03.389: INFO: namespace: e2e-tests-secret-namespace-4b64q, resource: bindings, ignored listing per whitelist
Feb 21 13:17:03.495: INFO: namespace e2e-tests-secret-namespace-4b64q deletion completed in 6.600365648s

• [SLOW TEST:35.290 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
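Two namespaces are involved above (e2e-tests-secrets-w5qnj and e2e-tests-secret-namespace-4b64q): a secret with the same name exists in both, and the pod must see only the copy from its own namespace when it mounts the secret as a volume. A sketch with made-up names:

kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl -n ns-a create secret generic shared-name --from-literal=data=from-ns-a
kubectl -n ns-b create secret generic shared-name --from-literal=data=from-ns-b
kubectl -n ns-a create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    args: ["cat", "/etc/secret-volume/data"]   # should print "from-ns-a"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name
EOF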
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:17:03.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 21 13:17:18.669: INFO: Successfully updated pod "labelsupdate738b7bb1-54ac-11ea-b1f8-0242ac110008"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:17:20.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gj2kk" for this suite.
Feb 21 13:17:44.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:17:45.089: INFO: namespace: e2e-tests-projected-gj2kk, resource: bindings, ignored listing per whitelist
Feb 21 13:17:45.208: INFO: namespace e2e-tests-projected-gj2kk deletion completed in 24.265729035s

• [SLOW TEST:41.712 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
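The labelsupdate pod projects its own metadata.labels into a file through a downwardAPI volume source; when the labels change, the kubelet rewrites that file and the test waits for the container to see the new contents ("Successfully updated pod" above). Illustratively:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    stage: before
spec:
  containers:
  - name: client-container
    image: busybox
    args: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Modify a label; the projected file is updated in place shortly afterwards.
kubectl label pod labelsupdate-demo stage=after --overwrite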
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:17:45.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 21 13:17:45.456: INFO: Waiting up to 5m0s for pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-9tbjf" to be "success or failure"
Feb 21 13:17:45.464: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.339534ms
Feb 21 13:17:47.513: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056400303s
Feb 21 13:17:49.533: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076281451s
Feb 21 13:17:52.031: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57478873s
Feb 21 13:17:54.052: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595608029s
Feb 21 13:17:56.070: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.613160347s
STEP: Saw pod success
Feb 21 13:17:56.070: INFO: Pod "pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:17:56.074: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 13:17:56.730: INFO: Waiting for pod pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:17:57.080: INFO: Pod pod-8c5dad3f-54ac-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:17:57.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9tbjf" for this suite.
Feb 21 13:18:03.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:18:03.365: INFO: namespace: e2e-tests-emptydir-9tbjf, resource: bindings, ignored listing per whitelist
Feb 21 13:18:03.500: INFO: namespace e2e-tests-emptydir-9tbjf deletion completed in 6.391693113s

• [SLOW TEST:18.293 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
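This emptyDir spec, like the (root,0666,tmpfs) variant later in the run, comes down to a tmpfs-backed emptyDir volume, a file created with the requested 0666 mode, and a check of the resulting ownership and permissions. Roughly, with busybox standing in for the suite's mounttest image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                       # non-root; drop this for the (root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c"]
    args: ["umask 0; echo hi > /mnt/volume/file; chmod 0666 /mnt/volume/file; ls -ln /mnt/volume; mount | grep /mnt/volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                      # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-mode-demo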
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:18:03.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pcnw
STEP: Creating a pod to test atomic-volume-subpath
Feb 21 13:18:03.947: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pcnw" in namespace "e2e-tests-subpath-k7722" to be "success or failure"
Feb 21 13:18:04.006: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 58.872211ms
Feb 21 13:18:06.211: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263062193s
Feb 21 13:18:08.236: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288010824s
Feb 21 13:18:10.268: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.320021471s
Feb 21 13:18:12.304: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356689574s
Feb 21 13:18:14.457: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.509858634s
Feb 21 13:18:16.475: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.527654658s
Feb 21 13:18:18.510: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.562079214s
Feb 21 13:18:20.549: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 16.601775435s
Feb 21 13:18:22.577: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 18.62992464s
Feb 21 13:18:24.633: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 20.68556358s
Feb 21 13:18:26.648: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 22.700239641s
Feb 21 13:18:28.669: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 24.72147017s
Feb 21 13:18:30.699: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 26.751481013s
Feb 21 13:18:32.750: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 28.802772504s
Feb 21 13:18:34.771: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 30.823093687s
Feb 21 13:18:37.575: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Running", Reason="", readiness=false. Elapsed: 33.627735933s
Feb 21 13:18:39.609: INFO: Pod "pod-subpath-test-configmap-pcnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.661279008s
STEP: Saw pod success
Feb 21 13:18:39.609: INFO: Pod "pod-subpath-test-configmap-pcnw" satisfied condition "success or failure"
Feb 21 13:18:39.632: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pcnw container test-container-subpath-configmap-pcnw: 
STEP: delete the pod
Feb 21 13:18:40.901: INFO: Waiting for pod pod-subpath-test-configmap-pcnw to disappear
Feb 21 13:18:41.018: INFO: Pod pod-subpath-test-configmap-pcnw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pcnw
Feb 21 13:18:41.018: INFO: Deleting pod "pod-subpath-test-configmap-pcnw" in namespace "e2e-tests-subpath-k7722"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:18:41.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-k7722" for this suite.
Feb 21 13:18:49.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:18:49.174: INFO: namespace: e2e-tests-subpath-k7722, resource: bindings, ignored listing per whitelist
Feb 21 13:18:49.224: INFO: namespace e2e-tests-subpath-k7722 deletion completed in 8.192225884s

• [SLOW TEST:45.724 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
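pod-subpath-test-configmap-pcnw mounts a single ConfigMap key, via subPath, over a path that already exists as a file in the container image, and the test checks that reads at that path return the ConfigMap data. An illustrative equivalent:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  passwd: "configmap-was-here"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    args: ["cat", "/etc/passwd"]          # an existing file, shadowed by the subPath mount
    volumeMounts:
    - name: config
      mountPath: /etc/passwd              # mountPath of an existing file
      subPath: passwd                     # mount just this one key over it
  volumes:
  - name: config
    configMap:
      name: subpath-demo-config
EOF
kubectl logs pod-subpath-demo             # prints the ConfigMap value, not the image's /etc/passwd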
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:18:49.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b29d09b6-54ac-11ea-b1f8-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 21 13:18:49.634: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008" in namespace "e2e-tests-projected-d2mnl" to be "success or failure"
Feb 21 13:18:49.805: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 171.602049ms
Feb 21 13:18:51.830: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195748494s
Feb 21 13:18:53.852: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218362657s
Feb 21 13:18:55.893: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258914956s
Feb 21 13:18:58.194: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55982557s
Feb 21 13:19:00.208: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.57421152s
Feb 21 13:19:02.642: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.007975724s
STEP: Saw pod success
Feb 21 13:19:02.642: INFO: Pod "pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:19:02.648: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 21 13:19:03.085: INFO: Waiting for pod pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:19:03.202: INFO: Pod pod-projected-secrets-b29f952f-54ac-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:19:03.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d2mnl" for this suite.
Feb 21 13:19:09.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:19:09.311: INFO: namespace: e2e-tests-projected-d2mnl, resource: bindings, ignored listing per whitelist
Feb 21 13:19:09.387: INFO: namespace e2e-tests-projected-d2mnl deletion completed in 6.169079711s

• [SLOW TEST:20.162 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:19:09.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 21 13:19:09.606: INFO: Waiting up to 5m0s for pod "pod-be866182-54ac-11ea-b1f8-0242ac110008" in namespace "e2e-tests-emptydir-hpmq8" to be "success or failure"
Feb 21 13:19:09.622: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.322573ms
Feb 21 13:19:11.631: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024885291s
Feb 21 13:19:13.644: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037355112s
Feb 21 13:19:15.812: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205397977s
Feb 21 13:19:19.956: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.349378897s
Feb 21 13:19:22.109: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.502388408s
Feb 21 13:19:24.182: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.576004689s
STEP: Saw pod success
Feb 21 13:19:24.183: INFO: Pod "pod-be866182-54ac-11ea-b1f8-0242ac110008" satisfied condition "success or failure"
Feb 21 13:19:24.298: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-be866182-54ac-11ea-b1f8-0242ac110008 container test-container: 
STEP: delete the pod
Feb 21 13:19:24.505: INFO: Waiting for pod pod-be866182-54ac-11ea-b1f8-0242ac110008 to disappear
Feb 21 13:19:24.669: INFO: Pod pod-be866182-54ac-11ea-b1f8-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:19:24.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hpmq8" for this suite.
Feb 21 13:19:30.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:19:30.886: INFO: namespace: e2e-tests-emptydir-hpmq8, resource: bindings, ignored listing per whitelist
Feb 21 13:19:31.087: INFO: namespace e2e-tests-emptydir-hpmq8 deletion completed in 6.397934385s

• [SLOW TEST:21.700 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:19:31.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 21 13:19:41.405: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-cb8043f3-54ac-11ea-b1f8-0242ac110008,GenerateName:,Namespace:e2e-tests-events-kms2g,SelfLink:/api/v1/namespaces/e2e-tests-events-kms2g/pods/send-events-cb8043f3-54ac-11ea-b1f8-0242ac110008,UID:cb823cc7-54ac-11ea-a994-fa163e34d433,ResourceVersion:22430268,Generation:0,CreationTimestamp:2020-02-21 13:19:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 351068005,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-28xcn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-28xcn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-28xcn true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c15460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c15510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:19:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:19:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:19:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-21 13:19:31 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-21 13:19:31 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-21 13:19:39 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://0813459f7a40dfe2db2bb0cf9235c3031023170b5c6ecfb32fc1c43cc66c055a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 21 13:19:43.425: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 21 13:19:45.450: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:19:45.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-kms2g" for this suite.
Feb 21 13:20:25.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:20:25.893: INFO: namespace: e2e-tests-events-kms2g, resource: bindings, ignored listing per whitelist
Feb 21 13:20:25.939: INFO: namespace e2e-tests-events-kms2g deletion completed in 40.254474058s

• [SLOW TEST:54.851 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
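The send-events pod is only the vehicle here; what the test actually verifies is that a scheduler event (reason Scheduled) and at least one kubelet event are recorded against the pod. The same lookup can be approximated with a field selector, using the names from this run:

kubectl get events -n e2e-tests-events-kms2g \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-cb8043f3-54ac-11ea-b1f8-0242ac110008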
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 21 13:20:25.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 21 13:20:26.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 21 13:20:26.291: INFO: stderr: ""
Feb 21 13:20:26.291: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 21 13:20:26.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k6dlx" for this suite.
Feb 21 13:20:32.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 21 13:20:32.428: INFO: namespace: e2e-tests-kubectl-k6dlx, resource: bindings, ignored listing per whitelist
Feb 21 13:20:32.671: INFO: namespace e2e-tests-kubectl-k6dlx deletion completed in 6.369698409s

• [SLOW TEST:6.732 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
Feb 21 13:20:32.671: INFO: Running AfterSuite actions on all nodes
Feb 21 13:20:32.671: INFO: Running AfterSuite actions on node 1
Feb 21 13:20:32.672: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 9199.201 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS